server

Results 1 - 25 of 2223
By: NetApp     Published Date: Dec 13, 2013
FlexPod Select with Hadoop delivers enterprise-class Hadoop with validated, pre-configured components for fast deployment, higher reliability, and smoother integration with existing applications and infrastructure. These technical reference architectures optimize storage, networking, and servers with the Cloudera and Hortonworks distributions of Hadoop. Leverage FlexPod Select with Hadoop to store, manage, process, and perform advanced analytics on your multi-structured data. Tuning parameters, optimization techniques, and other Hadoop cluster guidelines are also provided.
Tags : flexpod with hadoop, enterprise data, storage infrastructure, massive amounts of data
     NetApp
By: IBM     Published Date: Sep 02, 2014
Life Sciences organizations need to build IT infrastructures that are dynamic, scalable, and easy to deploy and manage, with simplified provisioning, high performance, and high utilization, and that can handle both data-intensive and server-intensive workloads, including Hadoop MapReduce. Solutions must scale, in terms of both processing and storage, to serve the institution long-term. Data has a life cycle, and managing it so it remains usable for mainstream analyses and applications is an important aspect of system design. This presentation describes these IT requirements and how Technical Computing solutions from IBM and Platform Computing address the challenges, delivering greater ROI and accelerated time to results for Life Sciences.
Tags : 
     IBM
By: Splice Machine     Published Date: Feb 13, 2014
Hadoop: Moving Beyond the Big Data Hype
Let’s face it. There is a lot of hype surrounding Big Data and Hadoop, the de facto Big Data technology platform. Companies want to mine and act on massive data sets, or Big Data, to unlock insights that can help them improve operational efficiency, delight customers, and leapfrog their competition. Hadoop has become a popular way to store massive data sets because it can distribute them across inexpensive commodity servers. Hadoop is fundamentally a file system (HDFS, the Hadoop Distributed File System) with a specialized programming model (MapReduce) to process the data in the files. Big Data has not lived up to expectations so far, partly because of limitations of Hadoop as a technology.
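
To make the programming model concrete, here is a minimal, self-contained sketch of MapReduce-style word counting in Python. It runs in a single process and only mirrors the map/shuffle/reduce data flow that Hadoop distributes across HDFS blocks and commodity servers; the sample documents are invented for illustration.

    # Minimal illustration of the MapReduce programming model: map emits
    # (key, value) pairs, a shuffle groups them by key, and reduce aggregates
    # each group. Hadoop runs these phases across many servers; this
    # single-process sketch only mirrors the data flow.
    from collections import defaultdict

    def map_phase(document):
        # Emit (word, 1) for every word, as a Hadoop mapper would per input split.
        for word in document.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        # Sum the values for one key, as a Hadoop reducer does per grouped key.
        return word, sum(counts)

    def run_job(documents):
        shuffle = defaultdict(list)  # stand-in for Hadoop's shuffle/sort step
        for doc in documents:
            for key, value in map_phase(doc):
                shuffle[key].append(value)
        return dict(reduce_phase(k, v) for k, v in shuffle.items())

    if __name__ == "__main__":
        print(run_job(["big data and Hadoop", "Hadoop stores big data"]))
        # {'big': 2, 'data': 2, 'and': 1, 'hadoop': 2, 'stores': 1}
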
Tags : sql-on-hadoop® evaluation guide, splice machine, hadoop
     Splice Machine
By: RYFT     Published Date: Apr 03, 2015
The new Ryft ONE platform is a scalable 1U device that addresses a major need in the fast-growing market for advanced analytics — avoiding I/O bottlenecks that can seriously impede analytics performance on today's hyperscale cluster systems. The Ryft ONE platform is designed for easy integration into existing cluster and other server environments, where it functions as a dedicated, high-performance analytics engine. IDC believes that the new Ryft ONE platform is well positioned to exploit the rapid growth we predict for the high-performance data analysis market.
Tags : ryft, ryft one platform, 1u device, advanced analytics, avoiding i/o bottlenecks, idc
     RYFT
By: IBM     Published Date: Nov 14, 2014
Platform Symphony is an enterprise-class server platform that delivers low-latency, scaled-out MapReduce processing. It supports multiple applications running concurrently, so organizations can increase utilization across all resources, resulting in a high return on investment.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Apr 02, 2015
In this Guide we have delivered the case for the benefits of big data technology applied to the needs of the manufacturing industry. In demonstrating the value of big data, we included:
• An overview of how manufacturing can benefit from the big data technology stack
• A high-level view of common big data pain points for manufacturers
• A detailed analysis of big data technology for manufacturers
• A view as to how manufacturers are going about big data adoption
• A proven case study with Omneo
• Dell PowerEdge servers with Intel® Xeon® processors
Tags : dell, intel, big data, manufacturing, technology stack, pain points, big data adoption, omneo
     Dell and Intel®
By: GridGain     Published Date: Mar 10, 2015
Software as a Service (SaaS) is a software distribution model in which applications are hosted by a vendor or service provider and made available to customers over the Internet. Instead of companies installing software on their own servers, known as the on-premises distribution model, application software providers host the software in the cloud and charge customers according to the time they spend using the software, or a monthly or annual fee. SaaS is becoming increasingly popular, and as the industry develops, more and more companies are dropping older business models in favor of this rapidly evolving methodology.
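
As a toy illustration of the two billing approaches the abstract contrasts (pay per time used versus a flat recurring fee), consider the sketch below; the rates and thresholds are invented for illustration, not drawn from the paper.

    # Toy comparison of two SaaS billing models: usage-based (charge per hour
    # of use) versus a flat monthly fee. The rates below are hypothetical.
    HOURLY_RATE = 0.75      # $ per hour of software use (assumed)
    MONTHLY_FEE = 99.00     # flat $ per month (assumed)

    def usage_based_charge(hours_used: float) -> float:
        return round(hours_used * HOURLY_RATE, 2)

    def cheaper_plan(hours_used: float) -> str:
        return "usage-based" if usage_based_charge(hours_used) < MONTHLY_FEE else "flat monthly"

    if __name__ == "__main__":
        for hours in (40, 200):
            print(hours, usage_based_charge(hours), cheaper_plan(hours))
        # 40 hours -> $30.00 (usage-based wins); 200 hours -> $150.00 (flat fee wins)
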
Tags : gridgain, saas, saas perfomance and scalability, in memory computing, data fabric, paas for saas, data grid, real-time streaming, hadoop
     GridGain
By: Adaptive Computing     Published Date: Feb 27, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
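
A minimal sketch of the cluster-spanning scheduling the abstract calls for, as opposed to scheduling confined to a single server: jobs are greedily placed on whichever node in the whole cluster has enough free cores. Node names, job names, and core counts are hypothetical.

    # Cluster-wide placement sketch: jobs go to any node in the cluster with
    # capacity, rather than being confined to one server. All values hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        free_cores: int

    def schedule(jobs, nodes):
        """Greedy placement: biggest jobs first, onto the least-loaded node."""
        placements = {}
        for job_name, cores_needed in sorted(jobs.items(), key=lambda j: -j[1]):
            candidates = [n for n in nodes if n.free_cores >= cores_needed]
            if not candidates:
                placements[job_name] = None   # no node can host it right now
                continue
            target = max(candidates, key=lambda n: n.free_cores)
            target.free_cores -= cores_needed
            placements[job_name] = target.name
        return placements

    if __name__ == "__main__":
        cluster = [Node("node-a", 16), Node("node-b", 8), Node("node-c", 4)]
        print(schedule({"analytics": 12, "simulation": 6, "etl": 4}, cluster))
        # {'analytics': 'node-a', 'simulation': 'node-b', 'etl': 'node-a'}
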
Tags : 
     Adaptive Computing
By: Hewlett Packard Enterprise     Published Date: Apr 20, 2018
For midsize firms around the world with 100 to 999 employees, advanced technology plays an increasingly important role in business success. Companies have been adding cloud resources to supplement on-premises server, storage, and networking capabilities. At the same time, the growth of mobile and remote work is changing how companies need to support workers to keep them as productive as possible.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Apr 20, 2018
In an innovation-powered economy, ideas need to travel at the speed of thought. Yet even as our ability to communicate across companies and time zones grows rapidly, people remain frustrated by downtime and unanticipated delays across the increasingly complex grid of cloud-based infrastructure, data networks, storage systems, and servers that power our work.
Tags : 
     Hewlett Packard Enterprise
By: Dell PC Lifecycle     Published Date: May 15, 2018
Windows Server 2016 is an important release in enabling IT to deliver on the promise of the third platform. It provides a path to a seamless, integrated cloud environment—incorporating public, private and hybrid models—with the software-defined data center as the hub. In migrating to this next-generation data center model, it is essential that IT leaders choose the right partner for the compute platform, as well as storage, networking, and systems management.
Tags : 
     Dell PC Lifecycle
By: IBM     Published Date: Jun 13, 2018
Today’s workloads are dynamic and power-hungry. Cloud requirements for mission-critical workloads often change overnight – causing IT priorities to shift, deadlines to tighten, and budgets to shrink. Seconds matter, especially when it comes to the bottom line. And bottom lines take brutal hits when companies haven’t properly established a powerful and flexible infrastructure.
Tags : 
     IBM
By: Mitel     Published Date: Apr 25, 2018
In businesses and organizations across the world, rooms once humming with wires, black boxes and blinking lights now sit empty. In a lonely phone closet, there’s only dust, a single terminal, or perhaps a foosball table where IT pros can let off steam. The cloud – and more specifically, cloud communications – is the source of this transformation, which has fundamentally changed the IT landscape. Some IT pros have embraced it. Some are working on migrating to the cloud over time. And others, through choice or necessity, are sticking with a premises-based approach. Read on to find out why some IT pros are shopping for game tables to go in empty server rooms while others are untangling wires and watching blinking lights – and which cases make the most sense for each approach.
Tags : cloud, communications, organizations, landscape
     Mitel
By: Cohesity     Published Date: Apr 24, 2018
In today’s application-driven, cloud-agnostic world, many organizations still rely on multiple legacy point products for backup and recovery – backup software, target storage, media and master servers, and cloud gateways, all from different vendors. With the slow backups and long recovery times of their existing backup products, organizations are unable to meet their tight business SLAs. Constant forklift upgrades and complex management UIs make current backup and recovery environments expensive to manage.
Tags : 
     Cohesity
By: Cohesity     Published Date: May 04, 2018
Cohesity provides the only hyper-converged platform that eliminates the complexity of traditional data protection solutions by unifying your end-to-end data protection infrastructure, including target storage, backup, replication, disaster recovery, and cloud tiering. Cohesity DataPlatform provides scale-out, globally deduplicated, highly available storage to consolidate all your secondary data, including backups, files, and test/dev copies. Cohesity also provides Cohesity DataProtect, a complete backup and recovery solution fully converged with Cohesity DataPlatform. It simplifies backup infrastructure and eliminates the need to run separate backup software, proxies, media servers, and replication. This paper focuses on the business and technical benefits of Cohesity DataPlatform for the data protection use case. It is intended for IT professionals interested in learning more about Cohesity’s technology differentiation and the advantages it offers for data protection - (i) Elim
Tags : 
     Cohesity
By: Cohesity     Published Date: May 04, 2018
In today’s application-driven, cloud-agnostic world, many organizations still rely on multiple legacy point products for backup and recovery – backup software, target storage, media and master servers, and cloud gateways, all from different vendors. With the slow backups and long recovery times of their existing backup products, organizations are unable to meet their tight business SLAs. Constant forklift upgrades and complex management UIs make current backup and recovery environments expensive to manage. There has to be a simpler, more cost-effective way to keep your data safe. This is the problem that Cohesity is designed to solve, with an entirely new approach to data protection.
Tags : 
     Cohesity
By: Microsoft Azure     Published Date: Apr 05, 2018
If you’re running a mixed operating system environment, you already know that choice is key. Whether on Microsoft Windows Server or Linux, in virtual machines or containers, enterprises are expanding their deployment options. By bringing Microsoft SQL Server to Linux, Microsoft continues to embrace open source solutions.
Tags : 
     Microsoft Azure
By: Microsoft Azure     Published Date: Apr 05, 2018
You’re running a lot of your business on Windows Server today – mission-critical apps, Active Directory, Domain Name Servers, not to mention virtual machines and storage. For more than 20 years, in fact, Windows Server has been the operating system of choice for enterprise workloads.
Tags : 
     Microsoft Azure
By: CA Technologies EMEA     Published Date: May 29, 2018
In time, containers will be the means by which all workloads are deployed on server platforms. It makes too much sense. Constructing fake machines around virtual workloads, just to make them portable across servers, was not the architecturally rational thing to do. It was the expedient thing to do, because cloud platforms had not yet evolved to where they needed to be. This book presents a snapshot of the emerging approaches to container monitoring and distributed systems management that engineers and their customers are building together.
Tags : 
     CA Technologies EMEA
By: Coyote Point Systems     Published Date: Aug 25, 2010
Application Delivery Controllers understand applications and optimize server performance - offloading compute-intensive tasks that prevent servers from quickly delivering applications. Learn how ADCs have taken over where load balancers left off.
Tags : coyote point, systems, slbs, server load balancers, adc, adcs, ssl, load balancing, load balancer, application delivery, application delivery controller, application delivery network, ssl acceleration, ssl offloading, server offload, server offloading, server acceleration, content switch, content switching.
     Coyote Point Systems
By: Coyote Point Systems     Published Date: Sep 07, 2010
If your organization's servers run applications that are critical to your business, chances are that you'd benefit from an application delivery solution. Today's Web applications can be delivered to users anywhere in the world and the devices used to access Web applications have become quite diverse.
Tags : coyote point, adc, buyer's guide, web applications, server load, server hardware, networking
     Coyote Point Systems
By: Coyote Point Systems     Published Date: Sep 02, 2010
At a projected market of over $4B by 2010 (Goldman Sachs), virtualization has firmly established itself as one of the most important trends in Information Technology. Virtualization is expected to have a broad influence on the way IT manages infrastructure. Major areas of impact include capital expenditure and ongoing costs, application deployment, green computing, and storage.
Tags : coyote point, adc, buyer's guide, web applications, server load, server hardware, networking, virtualized network, vlb advanced
     Coyote Point Systems
By: Coyote Point Systems     Published Date: Sep 02, 2010
In this paper, we'll explore how to use Coyote Point's Envoy to ensure 24x7 availability and fast connections for web content deployed at more than one geographic location.
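
As background, the general idea behind balancing across geographic sites can be sketched as health-checked, proximity-aware site selection. The snippet below illustrates that concept only; it is not Envoy's actual mechanism, and the site names, health flags, and latencies are hypothetical.

    # Concept sketch of multi-site availability: answer each client with the
    # nearest site that passes a health check. Not Envoy's implementation;
    # all sites, health states, and latencies below are hypothetical.
    SITES = {
        "us-east": {"healthy": True,  "distance_ms": {"us": 20, "eu": 90}},
        "eu-west": {"healthy": True,  "distance_ms": {"us": 90, "eu": 15}},
        "us-west": {"healthy": False, "distance_ms": {"us": 40, "eu": 140}},
    }

    def pick_site(client_region: str) -> str | None:
        healthy = {name: s for name, s in SITES.items() if s["healthy"]}
        if not healthy:
            return None  # total outage: nothing healthy to answer with
        # Prefer the lowest round-trip time to the client's region.
        return min(healthy, key=lambda n: healthy[n]["distance_ms"][client_region])

    if __name__ == "__main__":
        print(pick_site("us"))   # us-east (us-west is down, so next-nearest wins)
        print(pick_site("eu"))   # eu-west
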
Tags : coyote point, adc, buyer's guide, web applications, server load, server hardware, networking, virtualized network, vlb advanced, load balancing
     Coyote Point Systems
By: Coyote Point Systems     Published Date: Sep 02, 2010
The idea of load balancing is well defined in the IT world: a network device accepts traffic on behalf of a group of servers, and distributes that traffic according to load balancing algorithms and the availability of the services that the servers provide. From network administrators to server administrators to application developers, this is a generally well understood concept.
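
That definition can be captured in a few lines: a front end accepts requests on behalf of a server pool and hands each one to the next available server. The round-robin sketch below, with placeholder server names, shows one of the simplest such algorithms combined with an availability check.

    # Minimal sketch of the load-balancing definition above: requests are
    # distributed across a pool by an algorithm (round-robin here), skipping
    # servers marked unavailable. Server names are placeholders.
    from itertools import cycle

    class RoundRobinBalancer:
        def __init__(self, servers):
            self.available = {s: True for s in servers}
            self._ring = cycle(servers)

        def mark_down(self, server):
            self.available[server] = False

        def next_server(self):
            # Walk the ring until a server that is up is found.
            for _ in range(len(self.available)):
                server = next(self._ring)
                if self.available[server]:
                    return server
            raise RuntimeError("no available servers in the pool")

    if __name__ == "__main__":
        lb = RoundRobinBalancer(["web1", "web2", "web3"])
        lb.mark_down("web2")
        print([lb.next_server() for _ in range(4)])  # ['web1', 'web3', 'web1', 'web3']
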
Tags : coyote point, adc, buyer's guide, web applications, server load, server hardware, networking, virtualized network, vlb advanced, load balancing
     Coyote Point Systems
By: NEC     Published Date: Jan 28, 2010
Case Study: NEC provided intuitive, user-friendly technology to enrich the whole guest experience with improved service times, personalized assistance, and quality communications.
Tags : nec, peninsula shanghai, hotel, network solutions, real-time communication, productivity, ip telephony server, wireless technology, unified communications
     NEC