
By: IBM     Published Date: Sep 02, 2014
Life Sciences organizations need IT infrastructures that are dynamic, scalable, and easy to deploy and manage, offering simplified provisioning, high performance and high utilization, and able to exploit both data-intensive and server-intensive workloads, including Hadoop MapReduce. Solutions must scale in both processing and storage to better serve the institution long-term. Data has a life cycle, and making it usable for mainstream analyses and applications is an important aspect of system design. This presentation describes these IT requirements and how Technical Computing solutions from IBM and Platform Computing address the challenges to deliver greater ROI and accelerated time to results for Life Sciences.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Every day, the world creates 2.5 quintillion bytes of data and businesses are realizing tangible results from investments in big data analytics. IBM Spectrum Scale (GPFS) offers an enterprise class alternative to Hadoop Distributed File System (HDFS) for building big data platforms and provides a range of enterprise-class data management features. Spectrum Scale can be deployed independently or with IBM’s big data platform, consisting of IBM InfoSphere® BigInsights™ and IBM Platform™ Symphony. This document describes best practices for deploying Spectrum Scale in such environments to help ensure optimal performance and reliability.
Tags : 
     IBM
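A key practical difference between Spectrum Scale (GPFS) and HDFS is that GPFS presents a POSIX file system, so analytics code can use ordinary file I/O rather than a dedicated HDFS client API. A minimal illustrative sketch (the mount path is hypothetical; a temporary directory stands in for a real GPFS mount):

```python
import os
import tempfile

# Stand-in for a Spectrum Scale (GPFS) mount point such as /gpfs/bigdata
# (hypothetical path). Because GPFS exposes POSIX semantics, plain file
# I/O works -- no special client library is needed, unlike the HDFS API.
mount = tempfile.mkdtemp()

path = os.path.join(mount, "events.csv")
with open(path, "w") as f:
    f.write("ts,host,bytes\n")
    f.write("1700000000,node01,4096\n")

# Any POSIX-aware tool or library (pandas, grep, awk, ...) can read it back.
with open(path) as f:
    lines = f.read().splitlines()

print(len(lines))  # 2 (header + one record)
```

The same code would run unchanged against a real Spectrum Scale mount; with HDFS, the read and write calls would instead go through an HDFS client library.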
By: IBM     Published Date: May 20, 2015
According to our global study of more than 800 cloud decision makers and users, organizations are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social and analytics initiatives, and to the big data management challenge that often comes with them; it helps power the entire suite of game-changing technologies. Enterprises can aim higher when these deployments ride on the cloud. Mobile, analytics and social implementations can be bigger, bolder and drive greater impact when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
There is a lot of hype around the potential of big data, and organizations are hoping to achieve new innovations in products and services, with big data and analytics driving more concrete insights about their customers and their own business operations. To meet these challenges, IBM has introduced IBM® Spectrum Scale™. The new IBM Spectrum Scale storage platform has grown from GPFS, which entered the market in 1998, and IBM has clearly invested significant effort in developing this mature platform. Spectrum Scale addresses the key requirements of big data storage: extreme scalability for growth, reduced overhead of data movement, easy accessibility, geographic location independence and advanced storage functionality. Read the paper to learn more!
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
The IBM Spectrum Scale solution delivered up to 11x better throughput than EMC Isilon for Spectrum Protect (TSM) workloads. Using published data, Edison compared a solution built on EMC® Isilon® against an IBM® Spectrum Scale™ solution. (IBM Spectrum Scale was formerly IBM® General Parallel File System™, or IBM® GPFS™, also known by the code name Elastic Storage.) For both solutions, IBM® Spectrum Protect™ (formerly IBM Tivoli® Storage Manager, or IBM® TSM®) is used as a common workload performing backups to the target storage systems evaluated.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
A fast, simple, scalable and complete storage solution for today’s data-intensive enterprise, IBM Spectrum Scale is used extensively across industries worldwide. Spectrum Scale simplifies data management with integrated tools designed to help organizations manage petabytes of data and billions of files—as well as control the cost of managing these ever-growing data volumes.
Tags : 
     IBM
By: Storiant     Published Date: Mar 16, 2015
Read this new IDC report about how today's enterprise datacenters are dealing with challenges far more demanding than ever before. Foremost is the exponential growth of data, most of it unstructured. Big data and analytics implementations are also quickly becoming a strategic priority in many enterprises, demanding online access to more data, retained for longer periods of time. Legacy storage solutions with fixed design characteristics and a cost structure that doesn't scale are proving ill-suited for these new needs. This Technology Spotlight examines the issues that are driving organizations to replace older archive and backup-and-restore systems with business continuity and always-available solutions that can scale to handle extreme data growth while leveraging a cloud-based pricing model. The report also looks at the role of Storiant and its long-term storage services solution in the strategically important long-term storage market.
Tags : storiant, big data, analytics implementations, cloud-based pricing model, long-term storage services solution, long-term storage market
     Storiant
By: RYFT     Published Date: Apr 03, 2015
The new Ryft ONE platform is a scalable 1U device that addresses a major need in the fast-growing market for advanced analytics — avoiding I/O bottlenecks that can seriously impede analytics performance on today's hyperscale cluster systems. The Ryft ONE platform is designed for easy integration into existing cluster and other server environments, where it functions as a dedicated, high-performance analytics engine. IDC believes that the new Ryft ONE platform is well positioned to exploit the rapid growth we predict for the high-performance data analysis market.
Tags : ryft, ryft one platform, 1u device, advanced analytics, avoiding i/o bottlenecks, idc
     RYFT
By: EMC     Published Date: Jun 13, 2016
A Data Lake can meet the storage needs of your Modern Data Center. Check out the Top 10 Reasons your organization should adopt scale-out data lake storage for Hadoop Analytics on EMC Isilon.
Tags : 
     EMC
By: EMC     Published Date: Jun 13, 2016
EMC® Isilon® is a simple and scalable platform on which to build a scale-out data lake. Consolidate storage silos, improve storage utilization, reduce costs, and provide a future-proofed platform to run today's and tomorrow's workloads.
Tags : 
     EMC
By: EMC     Published Date: Jun 13, 2016
The EMC IsilonSD product family combines the power of Isilon scale-out NAS with the economy of software-defined storage. IsilonSD Edge is purpose-built to address the needs associated with growing unstructured data in enterprise edge locations, including remote and branch offices.
Tags : 
     EMC
By: EMC     Published Date: Jun 13, 2016
IDC believes that EMC Isilon is indeed an easy to operate, highly scalable and efficient Enterprise Data Lake Platform. IDC validated that a shared storage model based on the Data Lake can in fact provide enterprise-grade service-levels while performing better than dedicated commodity off-the-shelf storage for Hadoop workloads.
Tags : 
     EMC
By: IBM     Published Date: Nov 14, 2014
Platform Symphony is an enterprise-class server platform that delivers low-latency, scaled-out MapReduce workloads. It supports multiple applications running concurrently so that organizations can increase utilization across all resources, resulting in a high return on investment.
Tags : 
     IBM
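The MapReduce pattern that Symphony scales out can be illustrated with a toy, single-process word count (illustrative only; a real Symphony or Hadoop deployment distributes the map and reduce phases across many nodes):

```python
from collections import defaultdict

def map_phase(doc):
    # Emit (word, 1) pairs -- the "map" step of word count.
    for word in doc.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework's shuffle stage would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per word -- the "reduce" step.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data big insights", "big platforms"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # 3
```

The framework's value lies in running the map and reduce functions in parallel across a cluster; the per-record logic stays this simple.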
By: Intel     Published Date: Aug 06, 2014
Around the world and across all industries, high-performance computing is being used to solve today’s most important and demanding problems. More than ever, storage solutions that deliver high sustained throughput are vital for powering HPC and big data workloads. Intel® Enterprise Edition for Lustre* software unleashes the performance and scalability of the Lustre parallel file system for enterprises and organizations, both large and small. It allows users and workloads that need large-scale, high-bandwidth storage to tap into the power and scalability of Lustre, but with the simplified installation, configuration, and monitoring features of Intel® Manager for Lustre* software, a management solution purpose-built for the Lustre file system. Intel® Enterprise Edition for Lustre* software includes proven support from the Lustre experts at Intel, including worldwide 24x7 technical support. *Other names and brands may be claimed as the property of others.
Tags : 
     Intel
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study using the methodology described to design a Lustre* storage system is presented. *Other names and brands may be claimed as the property of others.
Tags : 
     Intel
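The component-to-system sizing iteration such a methodology describes can be sketched with simple arithmetic (the throughput and capacity figures below are hypothetical placeholders, not numbers from the paper): size the number of storage building blocks from the bandwidth target, then check the capacity that configuration yields.

```python
import math

# Hypothetical design targets and component measurements (illustrative only).
target_throughput_gbs = 120.0   # required aggregate bandwidth, GB/s
block_throughput_gbs = 4.5      # measured throughput per building block, GB/s
block_capacity_tb = 250.0       # usable capacity per building block, TB

# Component-level step: how many building blocks meet the bandwidth target?
blocks = math.ceil(target_throughput_gbs / block_throughput_gbs)

# System-level check: what capacity does that configuration deliver?
capacity_tb = blocks * block_capacity_tb

print(blocks)       # 27
print(capacity_tb)  # 6750.0
```

If the resulting capacity misses a separate capacity target, the design iterates: adjust the building-block definition and repeat, which is the component-level/system-level loop in miniature.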
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute – The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflops of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage – High-performance, highly scalable storage solutions with Intel® Lustre and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking – Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools – A broad range of software and tools to optimize and parallelize your software and clusters.
Further, Intel Enterprise Edition for Lustre software is backed by Intel, the recognized technical support provider for Lustre, and includes 24/7 service level agreement (SLA) coverage.
Tags : 
     Intel
By: IBM APAC     Published Date: Jul 19, 2019
AI applications and especially deep learning systems are extremely demanding and require powerful parallel processing capabilities. IDC research shows that, in terms of core capacity, a large gap between actual and required CPU capability will develop in the next several years. IDC sees the worldwide market for accelerated servers growing to $25.6 billion in 2022, with a 31.6% CAGR. Indeed, this market is growing so fast that IDC forecasts that by 2021, 12% of worldwide server value will be from accelerated compute. Download this IDC report to find out why organizations like yours will need to make decisions about replacing existing general-purpose hardware or supplementing it with hardware dedicated to AI-specific processing tasks.
Tags : 
     IBM APAC
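As a quick sanity check on a growth figure like the one quoted above, compounding at a 31.6% CAGR roughly triples a market over four years. A minimal sketch (the base-year value is not given in the abstract, so only the growth multiplier is computed):

```python
# Illustrative CAGR arithmetic: compound growth multiplier over n years.
# Only the multiplier is computed; no base-year market size is assumed.
cagr = 0.316
years = 4
multiplier = (1 + cagr) ** years
print(round(multiplier, 2))  # 3.0 -- the market roughly triples
```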
By: Stripe     Published Date: Aug 06, 2019
Payments is an increasingly strategic area of focus for enterprises, impacting market expansion, customer experience, business model evolution and, ultimately, revenue growth. As the role of payments in business strategy continues to expand, enterprises need secure, reliable and scalable infrastructure to underpin their transaction acceptance and processing capabilities. Stripe commissioned 451 Research to understand how large enterprise-scale merchants are thinking through their online payments infrastructure requirements. 451 Research surveyed 800 merchants across 8 countries, including a mix of business decision-makers from payments to finance to IT.
KEY FINDINGS
• 87% of mid- and large-sized businesses surveyed use the cloud as their dominant payments environment.
• Nearly two-thirds of respondents using the public cloud for payments have seen improvements in security, innovation and uptime, while nearly three in five cited improved scalability.
• Respondents using public-cloud-
Tags : payment security, platform as a service (paas), foreign currency transactions, fraud protection, payment solutions
     Stripe
By: Equinix EMEA     Published Date: May 22, 2019
This document outlines how insurance providers must optimize and scale for policy collaboration in a rapidly changing, omni-channel environment while shifting to providing new types of digital insurance. Additionally, shifting demographics are transforming interactions between insurance providers, reinsurers and customers. To address these shifts, insurance providers must move to the digital edge, adjacent to customers, clouds, partners and remote devices.
Tags : 
     Equinix EMEA
By: F5 Networks Singapore Pte Ltd     Published Date: Sep 09, 2019
Tech advances like the cloud, mobile technology, and the app-based software model have changed the way today’s modern business operates. They’ve also changed the way criminals attack and steal from businesses. Criminals strive to be agile in much the same way that companies do. Spreading malware is a favorite technique among attackers. According to the 2019 Data Breach Investigations Report, 28% of data breaches included malware.¹ While malware’s pervasiveness may not come as a surprise to many people, what’s not always so well understood is that automating app attacks by means of malicious bots is the most common way cybercriminals commit their crimes and spread malware. It helps them achieve scale.
Tags : 
     F5 Networks Singapore Pte Ltd
By: Nutanix     Published Date: Aug 22, 2019
Nutanix created hyperconverged infrastructure years ago because there was an urgent need for innovation within enterprise infrastructure. IT silos, management complexity, and gross inefficiencies were undermining the customer experience. It was time for a paradigm shift, which is why Nutanix melded webscale engineering with consumer-grade design to fundamentally transform the way organizations consume and leverage technology.
Tags : 
     Nutanix
By: Gigamon     Published Date: Sep 03, 2019
The IT pendulum is swinging to distributed computing environments, network perimeters are dissolving, and compute is being distributed across various parts of organizations’ infrastructure—including, at times, their extended ecosystem. As a result, organizations need to ensure the appropriate levels of visibility and security at these remote locations, without dramatically increasing staff or tools. They need to invest in solutions that can scale to provide increased coverage and visibility, but that also ensure efficient use of resources. By implementing a common distributed data services layer as part of a comprehensive security operations and analytics platform architecture (SOAPA) and network operations architecture, organizations can reduce costs, mitigate risks, and improve operational efficiency.
Tags : 
     Gigamon
By: Gigamon     Published Date: Sep 03, 2019
Network operations teams can no longer ignore the application layer. Application experience can make or break a digital enterprise, and today most enterprises are digital. To deliver optimal performance, network operations tools must be application-aware. However, application-awareness in the network and security tool layer is expensive and difficult to scale. Enterprises can mitigate these challenges with a network visibility architecture that includes application-aware network packet brokers (NPBs). EMA recommends that today’s network operations teams modernize their approach with full application visibility. EMA research has found that network teams are increasingly focused on directly addressing security risk reduction, service quality, end-user experience, and application performance. All of these new network operations benchmarks will require deeper application-level visibility. For instance, a network team focused on service quality will want to take a top-down approach to perfo
Tags : 
     Gigamon
By: IBM APAC     Published Date: Jul 19, 2019
With businesses generating larger data volumes to improve their competitiveness, their IT infrastructures are struggling to store and manage all of the data. To keep pace with this increase in data, organizations require a modern enterprise storage infrastructure that can scale to meet the demands of large data sets while reducing the cost and complexity of infrastructure management. This white paper examines IBM’s FlashSystem 9100 solution and the benefits it can offer businesses.
Tags : 
     IBM APAC
To get your white papers featured in the insideBIGDATA White Paper Library, contact: Kevin@insideHPC.com