IT Infrastructure

By: NetApp     Published Date: Dec 13, 2013
FlexPod Select with Hadoop delivers enterprise-class Hadoop with validated, pre-configured components for fast deployment, higher reliability and smoother integration with existing applications and infrastructure. These technical reference architectures optimize storage, networking, and servers for the Cloudera and Hortonworks distributions of Hadoop. Leverage FlexPod Select with Hadoop to help store, manage, process and perform advanced analytics on your multi-structured data. Tuning parameters, optimization techniques and other Hadoop cluster guidelines are also provided (an illustrative tuning sketch follows this entry).
Tags : flexpod with hadoop, enterprise data, storage infrastructure, massive amounts of data
     NetApp
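To make the "tuning parameters" point above concrete, here is a minimal sketch, not taken from the FlexPod reference architecture, that writes a handful of commonly tuned Hadoop properties into a mapred-site.xml style file. The property names are standard Hadoop 2.x keys, but the values shown are placeholders you would derive from your own cluster sizing.

    # Sketch: emit a few commonly tuned Hadoop properties as XML.
    # Property names are standard Hadoop 2.x keys; the values are
    # illustrative placeholders, not FlexPod-validated settings.
    import xml.etree.ElementTree as ET

    tuning = {
        "mapreduce.map.memory.mb": "2048",     # container size per map task
        "mapreduce.reduce.memory.mb": "4096",  # container size per reduce task
        "mapreduce.task.io.sort.mb": "512",    # map-side sort buffer
        "mapreduce.job.reduces": "32",         # number of reduce tasks
        "io.file.buffer.size": "131072",       # read/write buffer size
    }

    config = ET.Element("configuration")
    for name, value in tuning.items():
        prop = ET.SubElement(config, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value

    ET.ElementTree(config).write("mapred-site.xml", encoding="utf-8", xml_declaration=True)
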
By: IBM     Published Date: Sep 02, 2014
Life Sciences organizations need IT infrastructures that are dynamic, scalable, and easy to deploy and manage, with simplified provisioning, high performance and high utilization, and that can support both data-intensive and server-intensive workloads, including Hadoop MapReduce. Solutions must scale, in terms of both processing and storage, to serve the institution over the long term. Managing the life cycle of data and making it usable for mainstream analyses and applications is an important aspect of system design. This presentation describes those IT requirements and how Technical Computing solutions from IBM and Platform Computing address these challenges, delivering greater ROI and accelerated time to results for Life Sciences.
Tags : 
     IBM
By: IBM     Published Date: Sep 02, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm, ibm platform computing, save money
     IBM
By: IBM     Published Date: Sep 02, 2014
In today’s stringent financial services regulatory environment, with exponential data growth and dynamic business requirements, risk analytics has become integral to businesses. IBM Algorithmics provides sophisticated analyses across a wide range of economic scenarios that better quantify risk for multiple departments within a firm, or across the enterprise. With Algorithmics, firms have a better handle on their financial exposure and credit risks before they finalize real-time transactions. But this requires the performance and agility of a scalable infrastructure, which can drive up IT risk and complexity. The IBM Application Ready Solution for Algorithmics provides an agile, reliable and high-performance infrastructure to deliver trusted risk insights for sustained growth and profitability. This integrated offering, with a validated reference architecture, delivers the right risk insights at the right time while lowering total cost of ownership (a generic scenario-analysis sketch follows this entry).
Tags : ibm, it risk, financial risk analytics
     IBM
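As a generic illustration of scenario-based risk measurement, and not IBM Algorithmics' methodology, the sketch below simulates portfolio returns over many economic scenarios with NumPy and reads off a 99% value-at-risk. The portfolio weights, expected returns and covariances are invented for the example.

    # Sketch: Monte Carlo scenario generation and a 99% value-at-risk estimate.
    # Weights, means, and covariances are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(seed=7)

    weights = np.array([0.5, 0.3, 0.2])           # hypothetical portfolio weights
    mean = np.array([0.0002, 0.0001, 0.0003])     # assumed daily expected returns
    cov = np.array([[1.0e-4, 2.0e-5, 1.0e-5],
                    [2.0e-5, 2.5e-4, 3.0e-5],
                    [1.0e-5, 3.0e-5, 4.0e-4]])    # assumed daily covariance matrix

    scenarios = rng.multivariate_normal(mean, cov, size=100_000)  # one row per scenario
    portfolio_pnl = scenarios @ weights                           # scenario P&L per $1 invested

    var_99 = -np.percentile(portfolio_pnl, 1)     # loss not exceeded in 99% of scenarios
    print(f"99% one-day VaR per $1 of exposure: {var_99:.5f}")
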
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
IBM Platform Symphony - accelerate big data analytics – This demonstration will highlight the benefits and features of IBM Platform Symphony to accelerate big data analytics by maximizing distributed system performance, fully utilizing computing resources and effectively harnessing the power of Hadoop.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
According to our global study of more than 800 cloud decision makers and users, organizations are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social and analytics initiatives, and to the big data management challenge that often comes with them, and it helps power this entire suite of game-changing technologies. Enterprises can aim higher when these deployments are riding on the cloud. Mobile, analytics and social implementations can be bigger, bolder and drive greater impact when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating high-performance cloud service providers. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to adapt quickly to changing business requirements while dynamically managing workloads and data.
Tags : 
     IBM
By: TIBCO     Published Date: Nov 09, 2015
As one of the most exciting and widely adopted open-source projects, Apache Spark and its in-memory clusters are driving new opportunities for application development as well as increased demand for IT infrastructure. Apache Spark is now the most active Apache project, with more than 600 contributions made in the last 12 months by more than 200 organizations. A new survey of 1,417 IT professionals working with Apache Spark, conducted by Databricks, finds that high-performance analytics applications that can work with big data are driving a large proportion of that demand. Apache Spark is now being used to aggregate multiple types of data in memory rather than only pulling data from Hadoop (a short PySpark sketch follows this entry). For solution providers, the Apache Spark technology stack is a significant player because it is one of the core technologies used to modernize data warehouses, a huge segment of the IT industry that accounts for multiple billions in revenue. Spark also holds much promise for the future with data lakes, storage repositories for large volumes of raw data.
Tags : 
     TIBCO
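To illustrate the in-memory aggregation pattern the survey describes, here is a small PySpark sketch. It assumes a local Spark installation, and the file names and column names are placeholders rather than anything from the survey: it caches two differently shaped data sets after a join and aggregates them without another round trip to Hadoop storage.

    # Sketch: aggregate two heterogeneous data sets in memory with PySpark.
    # Requires a Spark installation; paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("in-memory-aggregation").getOrCreate()

    events = spark.read.json("events.json")   # semi-structured event data
    orders = spark.read.csv("orders.csv", header=True, inferSchema=True)  # tabular data

    joined = events.join(orders, on="customer_id").cache()  # keep the joined set in memory

    summary = (joined
               .groupBy("region")
               .agg(F.count("*").alias("events"),
                    F.sum("order_total").alias("revenue")))

    summary.show()
    spark.stop()
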
By: IBM     Published Date: Nov 14, 2014
Join Gartner, Inc. and IBM Platform Computing for an informative webinar where you will learn how to combine best-of-breed analytic solutions to provide a low-latency, shared big data infrastructure. This helps government IT departments make faster decisions by analyzing massive amounts of data, improving security, detecting fraud, and saving cost by optimizing and sharing existing infrastructure.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
View this series of short webcasts to learn how IBM Platform Computing products can help you maximize the agility of your distributed computing environment by improving operational efficiency, simplifying the user experience, optimizing application usage and license sharing, addressing spikes in infrastructure demand, and reducing data management costs.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
To quickly and economically meet new and peak demands, Platform LSF (SaaS) and Platform Symphony (SaaS) workload management as well as Elastic Storage on Cloud data management software can be delivered as a service, complete with SoftLayer cloud infrastructure and 24x7 support for technical computing and service-oriented workloads. Watch this demonstration to learn how the IBM Platform Computing Cloud Service can be used to simplify and accelerate financial risk management using IBM Algorithmics.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
Software-defined storage is enterprise-class storage that uses standard hardware, with all of the important storage and management functions performed in intelligent software. Software-defined storage delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure, in support of an overall software-defined environment (a simple policy-table sketch follows this entry).
Tags : 
     IBM
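As a loose illustration of what "policy-driven, application-aware" placement can mean in practice, the snippet below maps an application's service class to a storage pool using a simple policy table. It is a hypothetical sketch; the policy names, pool names, and attributes are invented and do not correspond to any IBM product API.

    # Sketch: pick storage attributes from a policy table keyed by service class.
    # Policy names, pool names, and attributes are hypothetical.
    POLICIES = {
        "gold":   {"tier": "ssd",  "replicas": 3, "snapshot_hours": 1},
        "silver": {"tier": "sas",  "replicas": 2, "snapshot_hours": 6},
        "bronze": {"tier": "sata", "replicas": 1, "snapshot_hours": 24},
    }

    def place_volume(app_name: str, service_class: str) -> dict:
        """Return the provisioning attributes an orchestration layer would apply."""
        policy = POLICIES.get(service_class, POLICIES["bronze"])
        return {"application": app_name, "pool": f"pool-{policy['tier']}", **policy}

    print(place_volume("risk-analytics-db", "gold"))
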
By: Dell and Intel®     Published Date: Aug 24, 2015
To extract value from an ever-growing onslaught of data, your organization needs next-generation data management, integration, storage and processing systems that allow you to collect, manage, store and analyze data quickly, efficiently and cost-effectively. That’s the case with Dell | Cloudera® Apache™ Hadoop® solutions for big data. These solutions provide end-to-end scalable infrastructure, leveraging open source technologies, to allow you to simultaneously store and process large datasets in a distributed environment for data mining and analysis, on both structured and unstructured data, and to do it all in an affordable manner.
Tags : 
     Dell and Intel®
By: Solix     Published Date: Aug 03, 2015
Every CIO wants to know whether their infrastructure will cope when data growth reaches 40 zettabytes by 2020. When data sets become too large, application performance slows and infrastructure struggles to keep up. Data growth drives increased cost and complexity everywhere, including power consumption, data center space, performance and availability (a back-of-envelope growth calculation follows this entry). To find out more, download the Gartner study now.
Tags : 
     Solix
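For context on the 40-zettabyte figure, here is a back-of-envelope calculation of the implied growth rate. The 2013 baseline of roughly 4.4 ZB is an assumption for illustration, not a figure from the Gartner study.

    # Back-of-envelope growth rate behind "40 ZB by 2020".
    # The 2013 baseline of 4.4 ZB is an assumed starting point, not from the study.
    baseline_zb, baseline_year = 4.4, 2013
    target_zb, target_year = 40.0, 2020

    years = target_year - baseline_year
    cagr = (target_zb / baseline_zb) ** (1 / years) - 1
    print(f"Implied growth: {cagr:.1%} per year over {years} years")  # roughly 37% per year
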
By: Adaptive Computing     Published Date: Feb 27, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center (a toy cluster-wide placement sketch follows this entry). Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
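To make the single-server-versus-cluster distinction concrete, the toy scheduler below spreads a multi-node job across whichever hosts currently have enough free cores, which is exactly the cluster-wide decision a per-server hypervisor scheduler never sees. This is a hypothetical sketch, not the behavior of any specific product; the host inventory and job are invented.

    # Toy cluster-wide placement: choose hosts for a job needing several nodes.
    # Host inventory and the job parameters are invented for illustration.
    free_cores = {"node01": 12, "node02": 4, "node03": 16, "node04": 8}

    def place_job(cores_per_task: int, tasks: int) -> list[str]:
        """Greedily pick the least-loaded hosts that can each fit one task."""
        chosen = []
        for host, cores in sorted(free_cores.items(), key=lambda kv: -kv[1]):
            if len(chosen) == tasks:
                break
            if cores >= cores_per_task:
                chosen.append(host)
        if len(chosen) < tasks:
            raise RuntimeError("not enough aggregate capacity; job stays queued")
        return chosen

    print(place_job(cores_per_task=8, tasks=2))  # e.g. ['node03', 'node01']
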
By: Cisco     Published Date: Mar 22, 2019
The Secure Data Center is a place in the network (PIN) where a company centralizes data and performs services for business. Data centers contain hundreds to thousands of physical and virtual servers that are segmented by applications, zones, and other methods. This guide addresses data center business flows and the security used to defend them. The Secure Data Center is one of the six places in the network within SAFE. SAFE is a holistic approach in which Secure PINs model the physical infrastructure and Secure Domains represent the operational aspects of a network.
Tags : 
     Cisco
By: Cisco     Published Date: Mar 22, 2019
Hyperconverged infrastructure solutions are making substantial inroads into a broader set of use cases and deployment options, but limitations exist. I&O leaders should view HCI solutions as tools in the toolbox, rather than panaceas for all IT infrastructure problems.
Tags : 
     Cisco
By: Cisco     Published Date: Mar 22, 2019
Artificial intelligence (AI) and machine learning (ML) are emerging technologies that will transform organizations faster than ever before. In the digital transformation era, success will be based on using analytics to discover the insights locked in the massive volume of data being generated today. Historically, these insights were discovered through manually intensive data analytics, but the amount of data continues to grow, as does its complexity. AI and ML are the latest tools for data scientists, enabling them to refine data into value faster (a minimal example of such a workflow follows this entry).
Tags : 
     Cisco
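As a small, generic example of the kind of ML workflow described, the sketch below trains a classifier on synthetic data with scikit-learn and reports how well it turns raw records into a predictive signal. Nothing here is specific to Cisco's platform; the data and model choice are illustrative.

    # Sketch: a minimal supervised-learning loop on synthetic data with scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
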
By: Rubrik EMEA     Published Date: Apr 15, 2019
Ransomware is not going away. This makes it imperative for businesses across all industries to adopt a data management strategy of multi-layered security, easy automation, and quick recovery. To learn more about Rubrik and how it can fit into your ransomware protection strategy while simplifying data protection across your entire datacenter, visit www.rubrik.com. As the leading next-generation data protection solution, Rubrik deploys as a plug-and-play appliance in less than an hour and has been adopted across all verticals and organization sizes including Fortune 50 companies.
Tags : data, backup, recovery, backups, solution, security, encryption, cyber, privacy, infrastructure
     Rubrik EMEA
By: Rubrik EMEA     Published Date: Apr 15, 2019
Backup and recovery needs a radical rethink. When today’s incumbent solutions were designed over a decade ago, IT environments were exploding, heterogeneity was increasing, and backup was the protection of last resort. The goal was to provide a low cost insurance policy for data, and to support this increasingly complex multi-tier, heterogeneous environment. The answer was to patch together backup and recovery solutions under a common vendor management framework and to minimize costs by moving data across the infrastructure or media.
Tags : data, backup, recovery, virtualization, solutions, cloud, architectures
     Rubrik EMEA
By: Dell EMC     Published Date: Feb 14, 2019
Technology is quickly moving to the forefront as organizations undertake digital and IT transformation projects that enable strategic differentiation in a world where users are leveraging applications and data in new ways. The reality is, most organizations were not born digital but instead have legacy business processes, applications, and infrastructure that require modernization and automation. Read this executive summary from Dell and Intel® to learn why businesses must embark on IT transformation to modernize and automate their legacy infrastructure and prime themselves to achieve their digital business goals. Intel Inside®. Powerful Productivity Outside.
Tags : 
     Dell EMC
By: CloudHealth by VMware     Published Date: Apr 15, 2019
Organisations moving to AWS seek improved performance, increased innovation, and a faster time to market, but the road to cloud maturity, and ultimately cloud success, proves both challenging and expensive. Learn to accelerate your AWS cloud journey with: a checklist for determining whether you have clear visibility into your AWS environment; expert tips for developing proper cloud security best practices; and real examples of financial, performance, and security management policies for automating your cloud ecosystem. Looking to optimise your AWS cloud infrastructure? Use this eBook to regain control over your visibility and cost management, security and compliance, and governance and automation.
Tags : 
     CloudHealth by VMware
By: CloudHealth by VMware     Published Date: Apr 15, 2019
Govern Your AWS Environment with Automated Policies. If you manage a dynamic cloud environment, you already know that managing your assets and dealing with continuous change is time consuming. Read this eBook to learn the best practices for policies that you must put in place to reduce the time it takes to optimize and manage your infrastructure (a sample automated policy check follows this entry).
Tags : 
     CloudHealth by VMware
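One common automated governance policy is flagging untagged resources. The sketch below uses the public boto3 API to list EC2 instances missing an "owner" tag so they can be reported or remediated; the required tag key is an assumption for illustration, not a CloudHealth-defined policy.

    # Sketch of one automated governance check: EC2 instances missing an "owner" tag.
    # Uses the standard boto3 API; credentials and region come from the environment.
    import boto3

    REQUIRED_TAG = "owner"  # illustrative policy requirement

    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instances")

    untagged = []
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    untagged.append(instance["InstanceId"])

    print(f"Instances missing the '{REQUIRED_TAG}' tag: {untagged}")
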