IT Infrastructure

Results 1 - 25 of 2863
By: NetApp     Published Date: Dec 13, 2013
FlexPod Select with Hadoop delivers enterprise-class Hadoop with validated, pre-configured components for fast deployment, higher reliability, and smoother integration with existing applications and infrastructure. These technical reference architectures optimize storage, networking, and servers with the Cloudera and Hortonworks distributions of Hadoop. Leverage FlexPod Select with Hadoop to help store, manage, process, and perform advanced analytics on your multi-structured data. Guidelines for Hadoop clusters, including tuning parameters and optimization techniques, are also provided (an illustrative sketch follows this entry).
Tags : flexpod with hadoop, enterprise data, storage infrastructure, massive amounts of data
     NetApp
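The tuning guidance itself lives in the reference architectures, not in this abstract. As a hedged illustration of the kind of knobs such guides cover, here is a small Python sketch that renders a handful of common Hadoop 2.x parameters as per-job command-line overrides; the parameter names are real, but the values are placeholders, not FlexPod-validated settings.

```python
# Illustrative only: common Hadoop 2.x tuning knobs of the kind a
# reference architecture documents. Values are placeholders, not
# validated settings; consult the actual guide for those.
TUNING = {
    "dfs.blocksize": 268435456,             # 256 MB blocks for large sequential reads
    "mapreduce.map.memory.mb": 4096,        # container memory per map task
    "mapreduce.reduce.memory.mb": 8192,     # container memory per reduce task
    "mapreduce.task.io.sort.mb": 512,       # map-side sort buffer
    "mapreduce.map.output.compress": True,  # compress intermediate map output
}

def to_cli_args(conf):
    """Render settings as GenericOptionsParser-style '-D key=value' pairs."""
    args = []
    for key, value in conf.items():
        if isinstance(value, bool):
            value = str(value).lower()  # Hadoop config expects 'true'/'false'
        args += ["-D", f"{key}={value}"]
    return args

if __name__ == "__main__":
    print(" ".join(to_cli_args(TUNING)))
```

Flags like these applied via `hadoop jar ... -D key=value` override settings for a single job; a validated design would instead bake tested values into the cluster-wide configuration files.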
By: IBM     Published Date: Sep 02, 2014
Life Sciences organizations need to build IT infrastructures that are dynamic, scalable, and easy to deploy and manage, with simplified provisioning, high performance, and high utilization, and that can exploit both data-intensive and server-intensive workloads, including Hadoop MapReduce. Solutions must scale, in terms of both processing and storage, in order to better serve the institution long-term. Data has a life cycle, and managing it so that it remains usable for mainstream analyses and applications is an important aspect of system design. This presentation describes these IT requirements and how Technical Computing solutions from IBM and Platform Computing address the challenges and deliver greater ROI and accelerated time to results for Life Sciences.
Tags : 
     IBM
By: IBM     Published Date: Sep 02, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm, ibm platform computing, save money
     IBM
By: IBM     Published Date: Sep 02, 2014
In today’s stringent financial services regulatory environment, with exponential data growth and dynamic business requirements, risk analytics has become integral to businesses. IBM Algorithmics provides sophisticated analyses for a wide range of economic scenarios that better quantify risk for multiple departments within a firm, or across the enterprise. With Algorithmics, firms have a better handle on their financial exposure and credit risks before they finalize real-time transactions. But this requires the performance and agility of a scalable infrastructure, which drives up IT risk and complexity. The IBM Application Ready Solution for Algorithmics provides an agile, reliable, and high-performance infrastructure to deliver trusted risk insights for sustained growth and profitability. This integrated offering, with a validated reference architecture, delivers the right risk insights at the right time while lowering the total cost of ownership (a sketch of scenario-based risk quantification follows this entry).
Tags : ibm, it risk, financial risk analytics
     IBM
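Algorithmics’ actual models are proprietary and not described in the abstract above. As a generic, standard-library-only sketch of the underlying idea (quantifying exposure across many simulated economic scenarios), here is a toy Monte Carlo value-at-risk calculation; the positions, volatilities, and independence assumption are all placeholders.

```python
import random
import statistics

def scenario_pnl(positions, n_scenarios=100_000, seed=7):
    """Simulate one-day portfolio P&L across random economic scenarios.

    positions: list of (market_value, annual_volatility) tuples. A real
    risk engine would use correlated factor models and full revaluation;
    here each position simply receives an independent normal daily shock.
    """
    rng = random.Random(seed)
    root_days = 252 ** 0.5  # scale annual volatility to a one-day horizon
    return [
        sum(mv * rng.gauss(0.0, vol / root_days) for mv, vol in positions)
        for _ in range(n_scenarios)
    ]

def value_at_risk(pnls, confidence=0.99):
    """Loss not exceeded with the given confidence, as a positive number."""
    return -sorted(pnls)[int((1.0 - confidence) * len(pnls))]

book = [(1_000_000, 0.20), (500_000, 0.35)]  # placeholder positions
pnls = scenario_pnl(book)
print(f"1-day 99% VaR: {value_at_risk(pnls):,.0f}")
print(f"mean P&L: {statistics.mean(pnls):,.0f}")
```

The scenario-aggregate-quantile shape is the point; a production engine layers scenario libraries, correlations, and real-time constraints on top of it.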
By: IBM     Published Date: May 20, 2015
IBM Platform Symphony: accelerate big data analytics. This demonstration highlights the benefits and features of IBM Platform Symphony for accelerating big data analytics by maximizing distributed system performance, fully utilizing computing resources, and effectively harnessing the power of Hadoop.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Our global study of more than 800 cloud decision makers and users finds that they are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social, and analytics initiatives, and to the big data management challenge that often comes with them; it helps power the entire suite of game-changing technologies. Enterprises can aim higher when these deployments ride on the cloud. Mobile, analytics, and social implementations can be bigger and bolder, and can drive greater impact, when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Six criteria for evaluating a high-performance cloud services provider. Engineering, scientific, analytics, big data, and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment that optimizes compute, storage, and networking infrastructure to quickly adapt to changing business requirements while dynamically managing workloads and data.
Tags : 
     IBM
By: TIBCO     Published Date: Nov 09, 2015
As one of the most exciting and widely adopted open-source projects, Apache Spark in-memory clusters are driving new opportunities for application development as well as increased demand for IT infrastructure. Apache Spark is now the most active Apache project, with more than 600 contributors from more than 200 organizations in the last 12 months. A new survey of 1,417 IT professionals working with Apache Spark, conducted by Databricks, finds that high-performance analytics applications that can work with big data are driving a large proportion of that demand. Apache Spark is now being used to aggregate multiple types of data in-memory, rather than only pulling data from Hadoop (a sketch follows this entry). For solution providers, the Apache Spark technology stack is a significant player because it is one of the core technologies used to modernize data warehouses, a huge segment of the IT industry that accounts for multiple billions in revenue. Spark also holds much promise for the future with data lakes.
Tags : 
     TIBCO
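To make “aggregating multiple types of data in-memory” concrete, here is a minimal PySpark sketch; the file paths and column names are hypothetical placeholders. It joins a CSV source with a JSON source, caches the joined rows in memory, and aggregates them.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("multi-source-agg").getOrCreate()

# Two differently shaped sources pulled into the same in-memory layer
# (paths and schemas are hypothetical placeholders).
orders = spark.read.option("header", True).csv("/data/orders.csv")
events = spark.read.json("/data/clickstream.json")

# Join once, cache in memory, then reuse for any number of aggregations
# instead of re-reading from Hadoop each time.
joined = orders.join(events, "customer_id").cache()

summary = (joined.groupBy("customer_id")
                 .agg(F.sum("amount").alias("total_spend"),
                      F.count("event_type").alias("click_count")))
summary.show()
```

The `.cache()` call is what distinguishes this pattern from repeatedly pulling data out of HDFS: subsequent actions reuse the in-memory dataset.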
By: IBM     Published Date: Nov 14, 2014
Join Gartner, Inc. and IBM Platform Computing for an informative webinar where you will learn how to combine best-of-breed analytic solutions to provide a low-latency, shared big data infrastructure. This helps government IT departments analyze massive amounts of data, improve security, detect fraud, and make faster decisions, while saving costs by optimizing and sharing their existing infrastructure.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
View this series of short webcasts to learn how IBM Platform Computing products can help you maximize the agility of your distributed computing environment by improving operational efficiency, simplifying the user experience, optimizing application usage and license sharing, addressing spikes in infrastructure demand, and reducing data management costs.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs (a sketch of the bursting logic such tools automate follows this entry).
Tags : 
     IBM
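The demonstration’s actual mechanics are not reproduced here. The following toy Python sketch shows the kind of bursting decision such workload managers automate: add public-cloud nodes only while queued demand exceeds what local capacity absorbs. The threshold, node cap, and throughput figures are invented for illustration.

```python
PENDING_THRESHOLD = 100  # queued jobs tolerated before bursting (placeholder policy)
MAX_CLOUD_NODES = 32     # cap on rented capacity (placeholder)
JOBS_PER_NODE = 10       # rough per-node throughput assumption (placeholder)

def nodes_to_provision(backlog):
    """How many cloud nodes to add for the current queue depth."""
    if backlog <= PENDING_THRESHOLD:
        return 0  # local capacity is enough; rent nothing
    excess = backlog - PENDING_THRESHOLD
    return min(-(-excess // JOBS_PER_NODE), MAX_CLOUD_NODES)  # ceiling division

# Simulated queue depths over a demand spike and its drain-down:
for backlog in (40, 120, 480, 90):
    print(f"{backlog:>3} queued -> {nodes_to_provision(backlog)} cloud nodes")
```

Because capacity is rented only above the threshold and released as the backlog drains, peak demand is met without permanently over-provisioning the data center.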
By: IBM     Published Date: Feb 13, 2015
To quickly and economically meet new and peak demands, Platform LSF (SaaS) and Platform Symphony (SaaS) workload management as well as Elastic Storage on Cloud data management software can be delivered as a service, complete with SoftLayer cloud infrastructure and 24x7 support for technical computing and service-oriented workloads. Watch this demonstration to learn how the IBM Platform Computing Cloud Service can be used to simplify and accelerate financial risk management using IBM Algorithmics.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
Software-defined storage is enterprise-class storage that uses standard hardware, with all the important storage and management functions performed in intelligent software. Software-defined storage delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure, in support of an overall software-defined environment (a toy sketch of policy-driven placement follows this entry).
Tags : 
     IBM
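As a toy illustration of what “policy-driven, application-aware” can mean in practice, the sketch below matches an application-declared policy to the cheapest storage tier that satisfies it. The policy fields and tiers are invented for illustration and are not IBM’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Requirements an application declares (fields are illustrative)."""
    min_iops: int
    encrypted: bool

@dataclass
class Tier:
    name: str
    iops: int
    supports_encryption: bool

# Hypothetical tiers, listed cheapest-first.
TIERS = [
    Tier("archive-hdd", 500, False),
    Tier("hybrid", 5_000, True),
    Tier("all-flash", 100_000, True),
]

def place(policy):
    """Return the cheapest tier that meets the declared policy."""
    for tier in TIERS:
        if tier.iops >= policy.min_iops and (tier.supports_encryption or not policy.encrypted):
            return tier
    raise ValueError("no tier satisfies the policy")

print(place(Policy(min_iops=20_000, encrypted=True)).name)  # -> all-flash
print(place(Policy(min_iops=100, encrypted=False)).name)    # -> archive-hdd
```

The point of the pattern is that applications state requirements and software chooses placement, rather than administrators hand-mapping workloads to arrays.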
By: Dell and Intel®     Published Date: Aug 24, 2015
To extract value from an ever-growing onslaught of data, your organization needs next-generation data management, integration, storage and processing systems that allow you to collect, manage, store and analyze data quickly, efficiently and cost-effectively. That’s the case with Dell | Cloudera® Apache™ Hadoop® solutions for big data. These solutions provide end-to-end scalable infrastructure, leveraging open source technologies, to allow you to simultaneously store and process large datasets in a distributed environment for data mining and analysis, on both structured and unstructured data, and to do it all in an affordable manner.
Tags : 
     Dell and Intel®
By: Solix     Published Date: Aug 03, 2015
Every CIO wants to know whether their infrastructure will handle it when data growth reaches 40 zettabytes by 2020. When data sets become too large, application performance slows and infrastructure struggles to keep up. Data growth drives increased cost and complexity everywhere, including power consumption, data center space, performance, and availability. To find out more, download the Gartner study now (a sketch of policy-based archiving, one common way to contain such growth, follows this entry).
Tags : 
     Solix
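The Gartner study itself is not excerpted here. As a generic sketch of one common way to contain database growth (policy-based archiving of cold records, the approach information lifecycle management tools take), consider the following; the retention window and records are synthetic.

```python
from datetime import datetime, timedelta

ARCHIVE_AFTER = timedelta(days=365)  # placeholder retention policy

records = [  # (record_id, last_accessed) - synthetic sample rows
    ("ord-1001", datetime(2024, 1, 5)),
    ("ord-1002", datetime(2025, 11, 2)),
    ("ord-1003", datetime(2023, 6, 30)),
]

now = datetime(2025, 12, 1)
archive = [rid for rid, seen in records if now - seen > ARCHIVE_AFTER]
keep = [rid for rid, seen in records if now - seen <= ARCHIVE_AFTER]

print("move to low-cost tier:", archive)  # shrinks the active database
print("stay in production:", keep)        # keeps hot data fast
```

Moving cold rows out of production keeps the active data set, and therefore application response times and infrastructure cost, under control as total data grows.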
By: Adaptive Computing     Published Date: Feb 27, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center (a toy sketch follows this entry). Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
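A toy scheduler makes the dichotomy concrete: a data-intensive job may need more capacity than any single server offers, so an allocation must span nodes. The sketch below greedily assembles such an allocation; node names, capacities, and the cores-only resource model are invented for illustration.

```python
def schedule(job_cores, nodes):
    """Greedily assemble an allocation that may span several nodes.

    Per-server virtualization scheduling stops at one machine's capacity;
    a cluster-wide scheduler instead aggregates free capacity until the
    job's demand is met, or reports that the job must wait.
    """
    allocation, remaining = [], job_cores
    for name, free in sorted(nodes.items(), key=lambda kv: -kv[1]):
        if remaining <= 0:
            break
        take = min(free, remaining)
        allocation.append((name, take))
        remaining -= take
    return allocation if remaining <= 0 else None

cluster = {"node-a": 16, "node-b": 8, "node-c": 24}
print(schedule(40, cluster))  # spans node-c and node-a
print(schedule(64, cluster))  # exceeds total capacity -> None (job waits)
```

Real schedulers add data locality, priorities, and preemption, but the aggregation step is the part single-server resource managers simply do not perform.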
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Managing infrastructure has always brought frustration, headaches, and wasted time. That’s because IT professionals have to spend their days, nights, and weekends dealing with problems and manually tuning their infrastructure. Traditional monitoring and support are too far removed from infrastructure, resulting in an endless cycle of break-fix-tune-repeat. Infrastructure powered by artificial intelligence, however, can overcome the limitations of humans and traditional tools. This white paper explores how HPE InfoSight, with its recommendation engine, paves the path to an autonomous data center in your hybrid cloud world (a generic sketch of the predictive idea follows this entry).
Tags : 
     Hewlett Packard Enterprise
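HPE InfoSight’s actual models are not public, so the following is only a generic sketch of the predictive idea: flag telemetry that deviates sharply from its recent baseline before it becomes a support ticket. The data and threshold are synthetic.

```python
import statistics

def anomalies(samples, window=20, threshold=3.0):
    """Flag samples more than `threshold` standard deviations above the
    trailing window's mean: the kind of early signal a predictive engine
    can turn into a recommendation before users notice a problem."""
    flagged = []
    for i in range(window, len(samples)):
        base = samples[i - window:i]
        mu, sigma = statistics.mean(base), statistics.pstdev(base)
        if sigma and (samples[i] - mu) / sigma > threshold:
            flagged.append((i, samples[i]))
    return flagged

latency_ms = [2.0, 2.1, 1.9, 2.2, 2.0] * 4 + [2.1, 9.5, 2.0]  # synthetic telemetry
print(anomalies(latency_ms))  # -> [(21, 9.5)], the latency spike
```

The break-fix-tune-repeat cycle the paper describes is what such detection, paired with fleet-wide learning and automated recommendations, is meant to replace.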
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
The bar for success is rising in higher education. University leaders and IT administrators are aware of the compelling benefits of digital transformation overall—and artificial intelligence (AI) in particular. AI can amplify human capabilities by using machine learning, or deep learning, to convert the fast-growing and plentiful sources of data about all aspects of a university into actionable insights that drive better decisions. But when planning a transformational strategy, these leaders must prioritize operational continuity. It’s critical to protect the everyday activities of learning, research, and administration that rely on the IT infrastructure to consistently deliver data to its applications.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Powerful IT doesn’t have to be complicated. Hyperconvergence puts your entire virtualized infrastructure and advanced data services into one integrated powerhouse. Deploy HCI on an intelligent fabric that can scale with your business, and you can hyperconverge the entire IT stack. This guide will help you: Understand the basic tenets of hyperconvergence and the software-defined data center; Solve common virtualization roadblocks; Identify 3 things modern organizations want from IT; Apply 7 hyperconverged tactics to your existing infrastructure now.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
"IT needs to reach beyond the traditional data center and the public cloud to form and manage a hybrid connected system stretching from the edge to the cloud, wherever the cloud may be. We believe this is leading to a new period of disruption and development that will require organizations to rethink and modernize their infrastructure more comprehensively than they have in the past. Hybrid cloud and hybrid cloud management will be the key pillars of this next wave of digital transformation – which is on its way much sooner than many have so far predicted. They have an important role to play as part of a deliberate and proactive cloud strategy, and are essential if the full benefits of moving over to a cloud model are to be fully realized."
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
With the maturing of the all-flash array (AFA) market, the established market leaders in this space are turning their attention to ways of differentiating themselves from their competition beyond product functionality. Consciously designing and driving a better customer experience (CX) is a strategy being pursued by many of these vendors. This white paper defines cloud-based predictive analytics, discusses the evolving storage requirements driving their use, and looks at how these platforms are being used to drive incremental value for public sector organizations in the areas of performance, availability, management, recovery, and information technology (IT) infrastructure planning.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Accelerate Digital Transformation. Make data center infrastructure cloud-ready. Optimize storage across clouds and data centers.
Tags : 
     Hewlett Packard Enterprise
By: Rackspace     Published Date: Feb 01, 2019
Rackspace Quick Start for Google Cloud Platform helps enterprises expedite their migration to Google Cloud using proven design, automation, and migration methodologies, all executed by Rackspace experts who have deployed more than a million applications into the cloud. By partnering with your company’s cross-functional leaders, our professional adoption team will fast-track your journey to the cloud, typically moving your first application(s) to the cloud within the first few weeks of the program. An annual review includes assistance with a disaster recovery (DR) simulation, an audit of patch levels, and upleveling of the deployment tools to ensure they align with infrastructure that may have evolved since deployment.
Tags : 
     Rackspace
By: Lenovo - APAC     Published Date: Jan 23, 2019
For Japan’s largest medical equipment wholesaler, Mutoh, ensuring its 300,000 products reach hospitals, clinics, and health centres on time is an imperative: someone’s life may depend on it. Without IT, that wouldn’t be possible. But Mutoh’s IT infrastructure was unable to keep pace with the demands of the business. Every time an order came in, Mutoh had to pull data from different systems across multiple servers, which was time-consuming. It needed an IT infrastructure that was fast, high-performing, reliable, stable, and flexible. Mutoh turned to Lenovo’s hyperconverged infrastructure, which helped the company achieve: • The ability to seamlessly extract data from different systems across multiple physical servers, helping Mutoh respond to customer orders as quickly as possible and thereby save lives • A modular hyperconverged solution that allowed Mutoh to invest on an as-needed basis, improving ROI and addressing spikes in customer demand
Tags : lenovodcg, nutanix, hyperconvergedinfrastructure, hci
     Lenovo - APAC