Apache Hadoop
Overview
Apache Hadoop is an open-source framework from the Apache Software Foundation that has rapidly evolved into the leading technology for processing large volumes of structured as well as unstructured and complex data. Its popularity stems in part from its ability to store, analyze, and manage access to very large volumes of data (petabytes) spread across thousands of cluster nodes with remarkable speed and efficiency. MapReduce, a massively parallel programming framework, is well suited to processing very large amounts of data: a typical MapReduce application can process several terabytes of data across several thousand machines. MapReduce, HDFS (the Hadoop Distributed File System), and YARN (resource and workload management) constitute the core of Hadoop.
Hadoop Capabilities
Data Storage
The Hadoop Distributed File System (HDFS) provides scalable, fault-tolerant, cost-efficient storage for your big data lake. It was designed to span large clusters of commodity servers, scaling up to hundreds of petabytes and thousands of servers. By distributing storage across many servers, the combined storage resource can grow linearly with demand while remaining economical at every scale.
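The core idea can be sketched in a few lines of plain Python: a file is cut into fixed-size blocks, and each block is stored on several nodes so the cluster tolerates node failures. This is a conceptual illustration only (round-robin placement, hypothetical node names), not the real HDFS placement algorithm, which also accounts for racks and node load.

```python
# Conceptual sketch of HDFS-style block storage (not the real implementation).
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, a common HDFS default block size
REPLICATION = 3                 # default HDFS replication factor

def place_blocks(file_size: int, nodes: list[str]) -> list[dict]:
    """Split a file into blocks and assign each block to REPLICATION nodes."""
    n_blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    placements = []
    for b in range(n_blocks):
        # Simple round-robin placement for illustration.
        replicas = [nodes[(b + r) % len(nodes)] for r in range(REPLICATION)]
        placements.append({"block": b, "replicas": replicas})
    return placements

# A 1 GB file spread over a small hypothetical cluster:
cluster = ["node1", "node2", "node3", "node4"]
layout = place_blocks(1024 * 1024 * 1024, cluster)
print(len(layout))            # 8 blocks of 128 MB
print(layout[0]["replicas"])  # ['node1', 'node2', 'node3']
```

Because every block lives on three nodes, losing any single server leaves at least two copies of each block available.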
Data Processing
MapReduce is the original framework for writing massively parallel applications that process large amounts of structured and unstructured data stored in HDFS. MapReduce can take advantage of the locality of data, processing it near the place it is stored on each node in the cluster in order to reduce the distance over which it must be transmitted.
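The programming model itself is simple: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. The classic word-count example can be sketched in plain Python (this illustrates the paradigm only; real Hadoop jobs use the Java MapReduce API and run distributed across the cluster):

```python
# Word count in the MapReduce style, simulated on a single machine.
from collections import defaultdict

def map_phase(line: str):
    """Map: emit a (word, 1) pair for every word in the input line."""
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return (key, sum(values))

lines = ["Hadoop stores data", "Hadoop processes data"]
pairs = [p for line in lines for p in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["hadoop"])  # 2
```

In a real cluster, each mapper runs on the node holding its block of input (data locality), and only the much smaller intermediate pairs cross the network during the shuffle.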
More recently, Apache Hadoop YARN opened Hadoop to other data processing engines that can now run alongside existing MapReduce jobs to process data in many different ways at the same time, such as Apache Spark. YARN provides the centralized resource management that enables you to process multiple workloads simultaneously. YARN is the foundation of the new generation of Hadoop and is enabling organizations everywhere to realize a modern data architecture.
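YARN's role can be pictured as a central bookkeeper of cluster resources: applications from different engines request containers, and the ResourceManager grants them from a shared pool. The toy sketch below uses hypothetical names and tracks only memory; the real YARN scheduler also handles CPU, queues, and preemption.

```python
# Toy sketch of YARN-style centralized resource management (not the YARN API).
class ResourceManager:
    def __init__(self, total_memory_mb: int):
        self.free_mb = total_memory_mb
        self.containers = []  # (application, memory) pairs granted so far

    def allocate(self, app: str, memory_mb: int) -> bool:
        """Grant a container if enough memory remains, else refuse."""
        if memory_mb > self.free_mb:
            return False
        self.free_mb -= memory_mb
        self.containers.append((app, memory_mb))
        return True

rm = ResourceManager(total_memory_mb=8192)
ok_mr = rm.allocate("mapreduce-job", 4096)  # a MapReduce job
ok_spark = rm.allocate("spark-job", 4096)   # a Spark job, side by side
ok_tez = rm.allocate("tez-job", 1024)       # refused: memory is exhausted
print(ok_mr, ok_spark, ok_tez)  # True True False
```

The point of the sketch is the first two calls: two different processing engines obtain resources from the same pool under one arbiter, which is what lets them run on the same cluster at the same time.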
Apache Tez is an extensible framework for building high-performance batch and interactive data processing applications, coordinated by YARN in Apache Hadoop. Tez builds on the MapReduce paradigm, dramatically improving its speed while maintaining its ability to scale to petabytes of data.
Data Access and Analysis
Applications can interact with the data in Hadoop using batch or interactive SQL (Apache Hive) or low-latency access with NoSQL (Apache HBase). Hive allows business users and data analysts to use their preferred business analytics, reporting and visualization tools with Hadoop. Data stored in HDFS in Hadoop can be searched using Apache Solr.
Data Governance and Security
The Hadoop ecosystem extends data access and processing with powerful tools for data governance and integration, including centralized security administration (Apache Ranger) and data classification tagging (Apache Atlas). Combined, these enable dynamic data access policies that proactively prevent data access violations. Hadoop perimeter security (Apache Knox) is also available to integrate with existing enterprise security systems and control user access to Hadoop.
Uraeus Consult runs awareness workshops on Hadoop technology and its potential.