Spark is a framework for data analytics on distributed computing clusters. It offers in-memory computation for faster data processing than MapReduce. It can use the Hadoop Distributed File System (HDFS) and run on top of an existing Hadoop cluster. It also processes structured data in Hive, as well as streaming data from sources such as HDFS, Flume, Kafka, and Twitter.
SoftElegance’s presentation at Spark Summit Brussels
SoftElegance presented its latest experience in the oil and gas industry at Spark Summit Brussels on October 27. The topic was 'Spark—universal computation engine for processing oil industry data'. The talk covered mathematical models for predicting failures of industrial equipment, an implementation of rod-pump failure prediction, and the technical framework behind it. Thank you to everyone who came to listen to us!
SoftElegance sponsored the ITO & BPO Germany Forum and presented the session "Big Data – The Future Of Software Development"
On 23 April 2015, SoftElegance participated as a sponsor and speaker at the ITO & BPO Germany Forum, the only international industry event in Germany focused on onshoring and nearshoring services and on opportunities to optimize existing models and solutions.
The speakers were Andrii Stolbov, CEO of SoftElegance, who founded the company in 1993 with a specialization in custom software development, mainly sophisticated business software, and Andrii Starzhinsky, VP of Marketing, who joined the company more than five years ago with the aim of providing innovative software development outsourcing services to German-speaking countries, the Netherlands, and the Scandinavian markets. Both speakers are currently researching Big Data challenges and the practical aspects of implementing new technologies in custom applications and the enterprise.
According to IDC, the big data market is estimated to reach up to 40 billion euros by 2018. By 2020, it is expected to create 4.4 million IT jobs worldwide. The volume of business data across all companies doubles every 15 months, and every day 2.5 exabytes of information (2.5 million terabytes) are generated.