
Teradata Expands Capabilities for Data Lakes with Apache Spark


Apr 13, 2016 | HADOOP SUMMIT, DUBLIN, Ireland

Spark deployment challenges prompt growing demand for Teradata's big data services worldwide

Teradata (NYSE: TDC), the big data analytics and marketing applications company, today announced that Think Big, a global Teradata consulting practice with leadership expertise in deploying Apache Spark™ and other big data technologies, is expanding its data lake and managed service offerings using Apache Spark. Spark is an open source cluster computing platform used for product recommendations, predictive analytics, sensor data analysis, graph analytics and more.

Today, customers can use a data lake with Apache Spark in the cloud, on typical "commodity-built" Hadoop environments, or with Teradata's Hadoop Appliance, the most powerful, ready-to-run enterprise platform, preconfigured and optimized to run enterprise-class big data workloads.

While interest in Spark continues to increase, many companies struggle to keep up with the rapid pace of change and frequency of releases of the open source platform. Think Big has successfully incorporated Spark into its frameworks for building enterprise-quality data lakes and analytical applications.

“Many organizations are experimenting with Apache Spark, in hopes of leveraging its strengths with streaming data, query, and analytics – often in conjunction with a data lake,” said Philip Russom, Ph.D., director of data management research, The Data Warehousing Institute (TDWI). “But users soon realize that Spark is not easy to use and that data lakes take more planning and design than they thought. Users in this situation need to turn to outside help in the form of consultants and managed service providers who have a track record of success with Apache Spark and data lakes across a diverse clientele. Think Big has such experience.”

Think Big is building repeatable service packages for Spark deployment, including adding Spark as an execution engine for its Data Lake and Managed Services offers. Through its training arm, Think Big Academy, the consultancy is also launching a series of new Spark training offerings for corporate clients. Led by experienced instructors, these classes help train managers, developers, and administrators on using Spark and its various modules, including machine learning, graph, streaming and query.
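To give a flavor of the query module named above, the following is a minimal sketch assuming a Spark 1.6-era PySpark environment; the table name and columns are hypothetical illustrations, not material drawn from the Academy's courses.

```python
# Minimal Spark SQL ("query" module) sketch, assuming Spark 1.6-era PySpark.
# Table name and columns are hypothetical, for illustration only.
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="spark-sql-sketch")
sqlContext = SQLContext(sc)

# Build a small DataFrame and register it so it can be queried with SQL.
df = sqlContext.createDataFrame(
    [("alice", 3), ("bob", 7)], ["user", "purchases"])
df.registerTempTable("purchases")

sqlContext.sql("SELECT user FROM purchases WHERE purchases > 5").show()
sc.stop()
```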

Also, Think Big's Data Science team will open source routines for distributed K-Modes clustering with Spark's Python application programming interface (API). These routines improve clustering of categorical data for customer segmentation and churn analysis. This code will be available with other Think Big open source efforts on Think Big's GitHub page.
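K-Modes works like k-means but for categorical attributes: each record is assigned to the cluster whose mode (the per-attribute most frequent value) it mismatches least, and the modes are then recomputed. The sketch below shows one way such an iteration can be distributed with PySpark's RDD API; it is an illustration only, not Think Big's released routines, and the function names, toy data, and fixed iteration count are assumptions.

```python
# Illustrative distributed K-Modes with PySpark's RDD API.
# NOT Think Big's released code; a minimal sketch of the technique only.
from collections import Counter

from pyspark import SparkContext

def mismatches(a, b):
    # Dissimilarity for categorical records: count of attributes that differ.
    return sum(1 for x, y in zip(a, b) if x != y)

def closest_mode(record, modes):
    # Index of the cluster mode with the fewest mismatched attributes.
    return min(range(len(modes)), key=lambda i: mismatches(record, modes[i]))

def k_modes(rdd, k, iterations=10, seed=42):
    modes = rdd.takeSample(False, k, seed)  # initialize modes from random records
    for _ in range(iterations):
        assigned = rdd.map(lambda rec: (closest_mode(rec, modes), rec))
        # Recompute each mode as the per-attribute most frequent category
        # among the records assigned to that cluster.
        new_modes = (
            assigned.groupByKey()
            .mapValues(lambda recs: tuple(
                Counter(col).most_common(1)[0][0] for col in zip(*recs)))
            .collectAsMap())
        # An empty cluster keeps its previous mode.
        modes = [new_modes.get(i, modes[i]) for i in range(k)]
    return modes

if __name__ == "__main__":
    sc = SparkContext(appName="kmodes-sketch")
    # Toy categorical records: (plan, region, support_tier) -- hypothetical columns.
    data = sc.parallelize([
        ("basic", "emea", "bronze"), ("basic", "emea", "silver"),
        ("premium", "amer", "gold"), ("premium", "apac", "gold"),
    ])
    print(k_modes(data, k=2, iterations=5))
    sc.stop()
```

With k=2, the toy data should separate the "basic" and "premium" records, since those groups disagree on most attributes.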

“Our Think Big consulting practice is expanding quickly from the Americas across Europe and China because demand is exploding for the experience, expertise and methods to help companies get a data lake using Spark and Hadoop right, the first time,” said Ron Bodkin, president of Think Big. “The deployment of Spark needs to be part of an information and analytics strategy. We know from experience which use cases are relevant, what the right questions are, and where to watch for deployment landmines. We understand business user expectations as well as technology requirements. We can help generate tangible business value, and our Spark customers are already doing so in domains ranging from omni-channel consumer personalization to real-time failure detection in high-tech manufacturing.”

Long before the big data buzz became fashionable, Think Big was already the world's first and leading pure-play big data services firm, implementing analytic solutions based on emerging technologies. Today, Think Big offers managed services for Hadoop in the areas of platform and application support, with well-defined processes, robust tools, and experienced big data consultants to affordably manage, monitor, and maintain the Hadoop platform. Initiating each engagement with a well-tested transition process, Think Big assesses and improves a client's production support, development, and sustainment teams – for efficient, effective deployment.

Related News Links

  • Think Big SPARK enablement services: For details, visit the Think Big web page
  • Teradata positioned as a Leader in the 2016 Gartner Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics – Get the new report here



Teradata is the connected multi-cloud data platform for enterprise analytics company. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. Learn more at Teradata.com.

