workloads

Results 1 - 25 of 594
By: IBM     Published Date: Sep 02, 2014
Life Sciences organizations need to build IT infrastructures that are dynamic, scalable, and easy to deploy and manage, with simplified provisioning, high performance, and high utilization, and that can exploit both data-intensive and server-intensive workloads, including Hadoop MapReduce. Solutions must scale in both processing and storage to serve the institution long term. Managing data across its life cycle, and making it usable for mainstream analyses and applications, is an important aspect of system design. This presentation describes these IT requirements and how Technical Computing solutions from IBM and Platform Computing address the challenges to deliver greater ROI and accelerated time to results for Life Sciences.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
The IBM Spectrum Scale solution delivered up to 11x better throughput than EMC Isilon for Spectrum Protect (TSM) workloads. Using published data, Edison compared an EMC® Isilon® solution against an IBM® Spectrum Scale™ solution. (IBM Spectrum Scale was formerly IBM® General Parallel File System™ or IBM® GPFS™, also known by the code name Elastic Storage.) For both solutions, IBM® Spectrum Protect™ (formerly IBM Tivoli® Storage Manager, or IBM® TSM®) was used as a common workload performing backups to the target storage systems evaluated.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating a high-performance cloud services provider. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: EMC     Published Date: Jun 13, 2016
EMC® Isilon® is a simple and scalable platform for building a scale-out data lake. Consolidate storage silos, improve storage utilization, reduce costs, and provide a future-proofed platform to run today's and tomorrow's workloads.
Tags : 
     EMC
By: EMC     Published Date: Jun 13, 2016
IDC believes that EMC Isilon is indeed an easy-to-operate, highly scalable and efficient Enterprise Data Lake Platform. IDC validated that a shared storage model based on the Data Lake can in fact provide enterprise-grade service levels while performing better than dedicated commodity off-the-shelf storage for Hadoop workloads.
Tags : 
     EMC
By: IBM     Published Date: Nov 14, 2014
Platform Symphony is an enterprise-class server platform that runs low-latency, scaled-out MapReduce workloads. It supports multiple applications running concurrently, so organizations can increase utilization across all resources, resulting in a high return on investment.
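For readers unfamiliar with the model, MapReduce itself is simple: a map step emits key/value pairs from each input record, and a reduce step aggregates the values per key. Below is a minimal, framework-agnostic Python sketch of the word-count pattern, offered as an illustration of the model only, not as Platform Symphony's API:

    from collections import defaultdict

    def map_phase(documents):
        # Map step: emit a (word, 1) pair for every word in every document.
        for doc in documents:
            for word in doc.lower().split():
                yield word, 1

    def reduce_phase(pairs):
        # Reduce step: sum the counts emitted for each key.
        totals = defaultdict(int)
        for word, count in pairs:
            totals[word] += count
        return dict(totals)

    docs = ["scale out MapReduce workloads", "MapReduce workloads scale"]
    print(reduce_phase(map_phase(docs)))
    # {'scale': 2, 'out': 1, 'mapreduce': 2, 'workloads': 2}

In a production grid, the map and reduce tasks run in parallel across many nodes, which is where a scheduler such as Platform Symphony earns its keep.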
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
To quickly and economically meet new and peak demands, Platform LSF (SaaS) and Platform Symphony (SaaS) workload management as well as Elastic Storage on Cloud data management software can be delivered as a service, complete with SoftLayer cloud infrastructure and 24x7 support for technical computing and service-oriented workloads. Watch this demonstration to learn how the IBM Platform Computing Cloud Service can be used to simplify and accelerate financial risk management using IBM Algorithmics.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
Organizations of all sizes need help building clusters and grids to support compute- and data-intensive application workloads. Read how the Hartree Centre is building several high-performance computing clusters to support a variety of research projects.
Tags : 
     IBM
By: Intel     Published Date: Aug 06, 2014
Purpose-built for use with the dynamic computing resources available from Amazon Web Services™, the Intel Lustre* solution provides the fast, massively scalable storage software needed to accelerate performance, even on complex workloads. Intel is a driving force behind the development of Lustre, and is committed to providing fast, scalable, and cost-effective storage with added support and manageability. Intel® Enterprise Edition for Lustre* software is the ideal foundation. *Other names and brands may be claimed as the property of others.
Tags : 
     Intel
By: Intel     Published Date: Aug 06, 2014
Around the world and across all industries, high-performance computing is being used to solve today’s most important and demanding problems. More than ever, storage solutions that deliver high sustained throughput are vital for powering HPC and Big Data workloads. Intel® Enterprise Edition for Lustre* software unleashes the performance and scalability of the Lustre parallel file system for enterprises and organizations, both large and small. It allows users and workloads that need large-scale, high-bandwidth storage to tap into the power and scalability of Lustre, but with the simplified installation, configuration, and monitoring features of Intel® Manager for Lustre* software, a management solution purpose-built for the Lustre file system. Intel® Enterprise Edition for Lustre* software includes proven support from the Lustre experts at Intel, including worldwide 24x7 technical support. *Other names and brands may be claimed as the property of others.
Tags : 
     Intel
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage - High-performance, highly scalable storage solutions with Intel® Lustre and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Further, Intel® Enterprise Edition for Lustre* software is backed by Intel, the recognized technical support provider for Lustre, and includes 24/7 service level agreement (SLA) coverage.
Tags : 
     Intel
By: Lenovo UK     Published Date: Oct 01, 2019
Businesses are using hyperconverged infrastructure (HCI) to untangle today’s big IT challenges. HCI can help you accelerate workloads, meet growing storage needs, gain the commercial benefits of hybrid cloud and more – all with easy-to-manage building blocks.
Tags : 
     Lenovo UK
By: Intel     Published Date: Nov 14, 2019
Infrastructure considerations for IT leaders. By 2020, deep learning will have reached a fundamentally different stage of maturity. Deployment and adoption will no longer be confined to experimentation, becoming a core part of day-to-day business operations across most fields of research and industry. Why? Because advancements in the speed and accuracy of the hardware and software that underpin deep learning workloads have made it both viable and cost-effective. Much of this added value will be generated by deep learning inference – that is, using a model to infer something about data it has never seen before. Models can be deployed in the cloud or data center, but more and more we will see them on end devices like cameras and phones. Intel predicts that there will be a shift in the ratio between cycles of inference and training from 1:1 in the early days of deep learning to well over 5:1 by 2020¹. Intel calls this the shift to ‘inference at scale’.
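Inference, in this sense, just means running a trained model forward on inputs it has never seen. The following is a minimal NumPy sketch of a single dense-layer forward pass with hypothetical "trained" parameters; it is a conceptual illustration only, not an Intel toolkit or the paper's workload:

    import numpy as np

    def softmax(z):
        # Numerically stable softmax over the last axis.
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def infer(x, weights, bias):
        # Forward pass of one dense layer: raw scores -> class probabilities.
        return softmax(x @ weights + bias)

    rng = np.random.default_rng(0)
    W = rng.normal(size=(8, 3))            # hypothetical weights: 8 features, 3 classes
    b = np.zeros(3)
    new_sample = rng.normal(size=(1, 8))   # data the model has never seen before
    print(infer(new_sample, W, b))         # probabilities that sum to 1

At production scale this forward pass is executed millions of times per trained model, which is why the inference-to-training ratio grows toward the 5:1 figure cited above.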
Tags : 
     Intel
By: Intel     Published Date: Nov 14, 2019
In-memory databases have moved from being an expensive option for exceptional analytics workloads to delivering on a far broader set of goals. Using real-world examples and scenarios, in this guide we look at the component parts of in-memory analytics, and consider how to ensure such capabilities can be deployed to maximize business value from both the technology and the insights generated.
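As a deliberately tiny illustration of the in-memory idea, Python's standard-library sqlite3 module can hold a table entirely in RAM and answer aggregate queries without touching disk. This sketch only demonstrates the concept, not the enterprise products the guide covers:

    import sqlite3

    # ":memory:" keeps the whole database in RAM, so the query path never touches disk.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("emea", 120.0), ("emea", 80.0), ("apac", 200.0)],
    )

    # A typical analytics-style aggregation, answered entirely from memory.
    for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    ):
        print(region, total)
    conn.close()

Enterprise in-memory platforms apply the same principle at terabyte scale, with persistence, concurrency and columnar layouts layered on top.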
Tags : 
     Intel
By: Intel     Published Date: Nov 14, 2019
Evaluating an existing HPC platform for efficient AI-driven workloads. The growth of artificial intelligence (AI) capabilities, data volumes and computing power in recent years means that AI is now a serious consideration for most organizations. When combined with high-performance computing (HPC) capabilities, its potential grows even stronger. However, converging the two approaches requires careful thought and planning. New AI initiatives must align with organizational strategy. Then AI workloads must integrate with your existing HPC infrastructure to achieve the best results in the most efficient and cost-effective way. This paper outlines the key considerations for organizations looking to bring AI into their HPC environment, and steps they can take to ensure the success of their first forays into HPC/AI convergence.
Tags : 
     Intel
By: Infinidat EMEA     Published Date: May 14, 2019
Big Data and analytics workloads represent a new frontier for organizations. Data is being collected from sources that did not exist 10 years ago. Mobile phone data, machine-generated data, and website interaction data are all being collected and analyzed. In addition, as IT budgets are already under pressure, Big Data footprints are getting larger and posing a huge storage challenge. This paper provides information on the issues that Big Data applications pose for storage systems and how choosing the correct storage infrastructure can streamline and consolidate Big Data and analytics applications without breaking the bank.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: May 14, 2019
Infinidat has developed a storage platform that provides unique simplicity, efficiency, reliability, and extensibility that enhances the business value of large-scale OpenStack environments. The InfiniBox® platform is a pre-integrated solution that scales to multiple petabytes of effective capacity in a single 42U rack. The platform’s innovative combination of DRAM, flash, and capacity-optimized disk delivers tuning-free, high performance for consolidated mixed workloads, including object/Swift, file/Manila, and block/Cinder. These factors combine to cut direct and indirect costs associated with large-scale OpenStack infrastructures, even versus “build-it-yourself” solutions. InfiniBox delivers seven nines (99.99999%) of availability without resorting to expensive replicas or slow erasure codes for data protection. Operations teams appreciate our delivery model, designed to drop easily into workflows at all levels of the stack, including native Cinder integration and Ansible automation playbooks.
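For context, block/Cinder volumes in an OpenStack cloud are normally provisioned through the standard APIs rather than vendor-specific tooling. Here is a minimal sketch using the openstacksdk library, assuming a cloud entry named "mycloud" exists in clouds.yaml; this shows generic OpenStack usage for illustration, not Infinidat's driver code, and exact call names may vary by SDK version:

    import openstack

    # Assumes a cloud named "mycloud" is configured in clouds.yaml.
    conn = openstack.connect(cloud="mycloud")

    # Ask Cinder for a 100 GiB volume; the backend (for example an InfiniBox pool)
    # is selected by the volume type and scheduler policy the operator configured.
    volume = conn.block_storage.create_volume(name="analytics-scratch", size=100)
    conn.block_storage.wait_for_status(volume, status="available")
    print(volume.id, volume.status)

A "native" Cinder integration simply means the array's driver sits behind this same API, so tools like Ansible or Terraform need no array-specific changes.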
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: Oct 10, 2019
Big Data and analytics workloads present organizations with new challenges. The data being collected comes from sources that did not exist ten years ago: mobile phone data, machine-generated data, and data from website interactions are all being captured and analyzed. With IT budgets already tight, the situation is made worse by Big Data volumes that keep growing and create enormous storage problems. This white paper describes the problems that Big Data applications pose for storage systems, and explains how choosing the right storage infrastructure can optimize Big Data and analytics applications without breaking the budget.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: Oct 10, 2019
Big Data and analytics workloads are the new frontier for businesses. Data is collected from sources that did not exist 10 years ago: mobile phone data, machine-generated data, and website interaction data are all gathered and analyzed. Moreover, with IT budgets under ever greater pressure, the Big Data footprint keeps growing and poses major challenges for storage systems. This document provides information on the problems that Big Data applications create for storage, and on how to choose the right infrastructure to optimize and consolidate Big Data and analytics applications without draining the finances.
Tags : 
     Infinidat EMEA
By: Rackspace     Published Date: Nov 06, 2019
Insights into avoiding migration regret among CxOs: The decision to move business workloads and applications to the cloud impacts all parts of the business and isn’t isolated to the IT team. Our latest research study into the motives, concerns and experiences of executive peers and business stakeholders when securing buy-in for a strategic IT move found that 97% of C-level executives in ANZ suffered migration regret during their first cloud migration. In a survey packed with telling hindsight, over 200 C-suite executives shared their expectations and experiences of the cloud migration journey. They reveal what they would have done differently, namely enhanced communication and a clear plan of action, and offer practical advice to help others secure internal buy-in and funding to support a move to the cloud.
Tags : 
     Rackspace
By: Apptio     Published Date: Oct 09, 2019
Cloud migration is consistently one of the top priorities of technology leaders across the world today, but many are overwhelmed by trying to plan their cloud migration, struggling to prioritize workloads and unsure of the cost implications. Download this white paper to discover the 5 key steps for cloud migration, based on the best practices of today’s most successful IT leaders:
- Baseline TCO resources (cloud, on-premises, hybrid)
- Map current on-premises resources to cloud offerings
- Evaluate and prioritize migration strategy
- Calculate migration costs
- Define success metrics
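To make the first two steps concrete, a baseline can be as simple as totaling current on-premises spend and mapping each workload to an estimated cloud equivalent. The sketch below uses purely hypothetical monthly figures as placeholders; it is not Apptio data or methodology:

    # Hypothetical monthly on-premises costs (step 1: baseline TCO).
    on_prem = {
        "hardware_amortization": 4000.0,
        "power_and_cooling": 1200.0,
        "admin_labor": 2500.0,
    }

    # Hypothetical per-workload cloud estimates (step 2: map resources to cloud offerings).
    cloud_estimate = {
        "web-frontend": 900.0,
        "analytics-cluster": 3100.0,
        "batch-reporting": 1400.0,
    }

    baseline = sum(on_prem.values())
    projected = sum(cloud_estimate.values())
    print(f"On-premises baseline:  ${baseline:,.0f}/month")
    print(f"Projected cloud spend: ${projected:,.0f}/month")
    print(f"Monthly delta:         ${baseline - projected:,.0f}")

The remaining steps refine this comparison with migration costs and the success metrics used to judge the move after the fact.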
Tags : 
     Apptio
By: AWS     Published Date: Nov 04, 2019
As organizations expand their cloud footprints, they need to reevaluate and consider who has access to their infrastructure at any given time. This can often be a large undertaking and lead to complex, sprawling network security interfaces across applications, workloads, and containers. Aporeto on Amazon Web Services (AWS) enables security administrators to unify their security management and visibility to create consistent policies across all their instances and containerized environments. Join the upcoming webinar to learn how Informatica leveraged Aporeto to create secure, keyless access for all their users.
Tags : 
     AWS
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
This book helps you understand both sides of the hybrid IT equation and how HPE can help your organization transform its IT operations and save time and money in the process. I delve into the worlds of security, economics, and operations to show you new ways to support your business workloads.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jul 18, 2018
Discover HPE OneSphere, a hybrid cloud management solution that enables IT to deliver private infrastructure with public-cloud ease. With the proliferation of self-service, on-demand infrastructure, enterprise developers have come to expect infrastructure as a service. However, the constraints of existing infrastructure and tools mean heavier workloads for IT teams.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Apr 26, 2019
"Customers in the midst of digital transformation look first to the public cloud when seeking dramatic simplification, cost, utilization and flexibility advantages for their new application workloads. But using public cloud comes with its own challenges. It’s often not as easy as advertised. And it’s not always possible or practical for enterprises to retire their traditional, on-premises enterprise system still running the company’s most mission-critical workloads. Hybrid IT – a balanced combination of traditional infrastructure, private cloud and public cloud – is the answer. "
Tags : 
     Hewlett Packard Enterprise

Add White Papers

To get your white papers featured in the insideBIGDATA White Paper Library, contact: Kevin@insideHPC.com