
Online or onsite, instructor-led live Big Data training courses start with an introduction to the fundamental concepts of Big Data, then progress to the programming languages and methodologies used to perform data analysis. Tools and infrastructure for enabling Big Data storage, distributed processing, and scalability are discussed, compared, and implemented in demonstration practice sessions.
Big Data training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Luxembourg onsite live Big Data trainings can be carried out locally on customer premises or in NobleProg corporate training centers.
NobleProg -- Your Local Training Provider
Testimonials
The fact that all the data and software were ready to use on an already prepared VM, provided by the trainer on external disks.
vyzVoice
Course: Hadoop for Developers and Administrators
The trainer was so knowledgeable and included areas I was interested in.
Mohamed Salama
Course: Data Mining & Machine Learning with R
Very tailored to needs.
Yashan Wang
Course: Data Mining with R
Richard is very calm and methodical, with an analytic insight - exactly the qualities needed to present this sort of course.
Kieran Mac Kenna
Course: Spark for Developers
I like the exercises done.
Nour Assaf
Course: Data Mining and Analysis
The hands-on exercise and the trainer capacity to explain complex topics in simple terms.
youssef chamoun
Course: Data Mining and Analysis
The information given was interesting, and the best part was towards the end when we were provided with data from Durex and worked on data we are familiar with, performing operations to get results.
Jessica Chaar
Course: Data Mining and Analysis
I mostly liked the trainer giving real-life examples.
Simon Hahn
Course: Administrator Training for Apache Hadoop
I genuinely enjoyed the trainer's great competence.
Grzegorz Gorski
Course: Administrator Training for Apache Hadoop
I genuinely enjoyed the many hands-on sessions.
Jacek Pieczątka
Course: Administrator Training for Apache Hadoop
I thought that the information was interesting.
Allison May
Course: Data Visualization
I really appreciated that Jeff utilized data and examples that were applicable to education data. He made it interesting and interactive.
Carol Wells Bazzichi
Course: Data Visualization
Learning about all the chart types and what they are used for. Learning the value of de-cluttering. Learning about the methods to show time data.
Susan Williams
Course: Data Visualization
Trainer was enthusiastic.
Diane Lucas
Course: Data Visualization
I really liked the content / Instructor.
Craig Roberson
Course: Data Visualization
I am a hands-on learner and this was something that he did a lot of.
Lisa Comfort
Course: Data Visualization
I liked the examples.
Peter Coleman
Course: Data Visualization
I enjoyed the good real-world examples and the reviews of existing reports.
Ronald Parrish
Course: Data Visualization
I really benefited from the willingness of the trainer to share more.
Balaram Chandra Paul
Course: A practical introduction to Data Analysis and Big Data
We know a lot more about the whole environment.
John Kidd
Course: Spark for Developers
The trainer made the class interesting and entertaining which helps quite a bit with all day training.
Ryan Speelman
Course: Spark for Developers
I think the trainer had an excellent style of combining humor and real life stories to make the subjects at hand very approachable. I would highly recommend this professor in the future.
Course: Spark for Developers
Liked very much the interactive way of learning.
Luigi Loiacono
Course: Data Analysis with Hive/HiveQL
It was a very practical training, I liked the hands-on exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
I benefited from the good overview and the good balance between theory and exercises.
Proximus
Course: Data Analysis with Hive/HiveQL
I enjoyed the dynamic interaction and the hands-on engagement with the subject, thanks to the Virtual Machine. Very stimulating!
Philippe Job
Course: Data Analysis with Hive/HiveQL
Ernesto did a great job explaining the high level concepts of using Spark and its various modules.
Michael Nemerouf
Course: Spark for Developers
I benefited from the competence and knowledge of the trainer.
Jonathan Puvilland
Course: Data Analysis with Hive/HiveQL
I generally benefited from the presentation of the technologies.
Continental AG / Abteilung: CF IT Finance
Course: A practical introduction to Data Analysis and Big Data
Overall the Content was good.
Sameer Rohadia
Course: A practical introduction to Data Analysis and Big Data
Michael the trainer is very knowledgeable and skillful about the subject of Big Data and R. He is very flexible and quickly customizes the training to meet clients' needs. He is also very capable of solving technical and subject matter problems on the go. Fantastic and professional training!
Xiaoyuan Geng - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
I really enjoyed the introduction of new packages.
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
The tutor, Mr. Michael An, interacted with the audience very well and the instruction was clear. The tutor also went out of his way to add more information based on requests from the students during the training.
Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
The subject matter and the pace were perfect.
Tim - Ottawa Research and Development Center, Science Technology Branch, Agriculture and Agri-Food Canada
Course: Programming with Big Data in R
The example and training material were sufficient and made it easy to understand what you are doing.
Teboho Makenete
Course: Data Science for Big Data Analytics
This is one of the best hands-on with exercises programming courses I have ever taken.
Laura Kahn
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
This is one of the best quality online trainings I have ever taken in my 13-year career. Keep up the great work!
Course: Artificial Intelligence - the most applied stuff - Data Analysis + Distributed AI + NLP
It was very hands-on; we spent half the time actually doing things in Cloudera/Hadoop, running different commands, checking the system, and so on. The extra materials (books, websites, etc.) were really appreciated, as we will have to continue to learn. The installations were quite fun and very handy, and the cluster setup from scratch was really good.
Ericsson
Course: Administrator Training for Apache Hadoop
Richard's training style kept it interesting, the real world examples used helped to drive the concepts home.
Jamie Martin-Royle - NBrown Group
Course: From Data to Decision with Big Data and Predictive Analytics
The content, as I found it very interesting and think it will help me in my final year at university.
Krishan Mistry - NBrown Group
Course: From Data to Decision with Big Data and Predictive Analytics
The trainer was fantastic and really knew his stuff. I learned a lot about the software I didn't know previously which will help a lot at my job!
Steve McPhail - Alberta Health Services - Information Technology
Course: Data Analysis with Hive/HiveQL
The high-level principles of Hive, HDFS, ...
Geert Suys - Proximus Group
Course: Data Analysis with Hive/HiveQL
The hands-on work. The mix of practice and theory.
Proximus Group
Course: Data Analysis with Hive/HiveQL
Fulvio was able to grasp our company's business case and correlate it with the course material almost instantly.
Samuel Peeters - Proximus Group
Course: Data Analysis with Hive/HiveQL
Lot of hands-on exercises.
Ericsson
Course: Administrator Training for Apache Hadoop
The Ambari management tool. The ability to discuss practical Hadoop experiences from business cases other than telecom.
Ericsson
Course: Administrator Training for Apache Hadoop
I enjoyed the good balance between theory and hands-on labs.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
I generally benefited from a better understanding of Ignite.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
I mostly liked the good lectures.
N. V. Nederlandse Spoorwegen
Course: Apache Ignite: Improve Speed, Scale and Availability with In-Memory Computing
Big Data Course Outlines in Luxembourg
Learning to work with SPSS at an independent level.
Audience:
Analysts, researchers, scientists, students, and all those who want to acquire the ability to use the SPSS package and learn popular data mining techniques.
In this instructor-led, live training, participants will learn how to use MonetDB and how to get the most value out of it.
By the end of this training, participants will be able to:
- Understand MonetDB and its features
- Install and get started with MonetDB
- Explore and perform different functions and tasks in MonetDB
- Accelerate the delivery of their project by maximizing MonetDB capabilities
Audience
- Developers
- Technical experts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
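As a rough illustration of the kind of exercise covered in this course, the short Python sketch below connects to MonetDB with the pymonetdb client and runs a few SQL statements; the credentials, the database name "demo", and the table are placeholders rather than course material.

import pymonetdb

# Minimal sketch: connect to a local MonetDB server (placeholder credentials/database).
conn = pymonetdb.connect(username="monetdb", password="monetdb",
                         hostname="localhost", database="demo")
cur = conn.cursor()
cur.execute("CREATE TABLE visits (page VARCHAR(200), hits INT)")
cur.execute("INSERT INTO visits VALUES ('/home', 42)")
cur.execute("SELECT page, hits FROM visits")
print(cur.fetchall())   # e.g. [('/home', 42)]
conn.commit()
conn.close()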
Spark SQL is commonly used:
- to execute SQL queries.
- to read data from an existing Hive installation.
In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL.
By the end of this training, participants will be able to:
- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
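As a minimal sketch of the Spark SQL work practiced in this course, the PySpark snippet below registers a JSON data set as a temporary view and queries it; the file path and column names are placeholders.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()
# Read a (placeholder) JSON data set and expose it to SQL as a temporary view.
people = spark.read.json("/data/people.json")
people.createOrReplaceTempView("people")
# Query the view with plain SQL and print the result.
spark.sql("SELECT city, COUNT(*) AS n FROM people GROUP BY city ORDER BY n DESC").show()
spark.stop()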
Attendees will learn during this course how to manage big data using its three pillars of data integration, data governance and data security in order to turn big data into real business value. Different exercises conducted on a case study of customer management will help attendees to better understand the underlying processes.
In this instructor-led, live training, participants will learn the fundamentals of Apache Hama as they step through the creation of a BSP-based application and a vertex-centric program using the Apache Hama frameworks.
By the end of this training, participants will be able to:
- Install and configure Apache Hama
- Understand the fundamentals of Apache Hama and the Bulk Synchronous Parallel (BSP) programming model
- Build a BSP-based program using Apache Hama BSP framework
- Build a vertex-centric program using Apache Hama Graph Framework
- Build, test, and debug their own Apache Hama applications
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.
By the end of this training, participants will be able to:
- Understand Kafka Streams features and advantages over other stream processing frameworks
- Process stream data directly within a Kafka cluster
- Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
- Write concise code that transforms input Kafka topics into output Kafka topics
- Build, package and deploy the application
Audience
- Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange
In this instructor-led, live training, participants will learn how to install, configure and use Dremio as a unifying layer for data analysis tools and the underlying data repositories.
By the end of this training, participants will be able to:
- Install and configure Dremio
- Execute queries against multiple data sources, regardless of location, size, or structure
- Integrate Dremio with BI and data sources such as Tableau and Elasticsearch
Audience
- Data scientists
- Business analysts
- Data engineers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn how to optimize and debug Apache Drill to improve the performance of queries on very large data sets. The course begins with an architectural overview and feature comparison between Apache Drill and other interactive data analysis tools. Participants then step through a series of interactive, hands-on practice sessions that include installation, configuration, performance evaluation, query optimization, data partitioning, and debugging of an Apache Drill instance in a live lab environment.
By the end of this training, participants will be able to:
- Install and configure Apache Drill
- Understand Apache Drill's architecture and features
- Understand how Apache Drill receives and executes queries
- Optimize Drill queries for distributed SQL execution
- Debug Apache Drill
Audience
- Developers
- Systems administrators
- Data analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
By the end of this training, participants will be able to build producer and consumer applications for real-time stream data processing.
Audience
- Developers
- Administrators
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training, participants will learn the fundamentals of Apache Drill, then leverage the power and convenience of SQL to interactively query big data across multiple data sources, without writing code. Participants will also learn how to optimize their Drill queries for distributed SQL execution.
By the end of this training, participants will be able to:
- Perform "self-service" exploration on structured and semi-structured data on Hadoop
- Query known as well as unknown data using SQL queries
- Understand how Apache Drill receives and executes queries
- Write SQL queries to analyze different types of data, including structured data in Hive, semi-structured data in HBase or MapR-DB tables, and data saved in files such as Parquet and JSON.
- Use Apache Drill to perform on-the-fly schema discovery, bypassing the need for complex ETL and schema operations
- Integrate Apache Drill with BI (Business Intelligence) tools such as Tableau, Qlikview, MicroStrategy and Excel
Audience
- Data analysts
- Data scientists
- SQL programmers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
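For orientation only, the sketch below submits one such SQL query through Drill's REST API from Python; it assumes a Drill instance on localhost:8047 and a sample JSON file reachable through the dfs storage plugin (the path and column names are hypothetical).

import requests

# Send a SQL query to Drill's REST endpoint and print the returned rows.
resp = requests.post(
    "http://localhost:8047/query.json",
    json={"queryType": "SQL",
          "query": "SELECT name, age FROM dfs.`/tmp/people.json` WHERE age > 30"},
)
resp.raise_for_status()
for row in resp.json()["rows"]:
    print(row)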
In this instructor-led, live training, participants will learn the essentials of MemSQL for development and administration.
By the end of this training, participants will be able to:
- Understand the key concepts and characteristics of MemSQL
- Install, design, maintain, and operate MemSQL
- Optimize schemas in MemSQL
- Improve queries in MemSQL
- Benchmark performance in MemSQL
- Build real-time data applications using MemSQL
Audience
- Developers
- Administrators
- Operation Engineers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
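As a small sketch of hands-on work with MemSQL: it speaks the MySQL wire protocol, so a standard MySQL client such as pymysql can connect to it. The host, credentials, and table below are placeholders for a local single-node setup.

import pymysql

# Connect to a (placeholder) local MemSQL node over the MySQL protocol.
conn = pymysql.connect(host="127.0.0.1", port=3306, user="root", password="", database="demo")
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS events (id BIGINT, kind VARCHAR(50), ts DATETIME)")
    cur.execute("INSERT INTO events VALUES (1, 'click', NOW())")
    cur.execute("SELECT kind, COUNT(*) FROM events GROUP BY kind")
    print(cur.fetchall())
conn.commit()
conn.close()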
In this instructor-led, live training, participants will learn the fundamentals of Amazon Redshift.
By the end of this training, participants will be able to:
- Install and configure Amazon Redshift
- Load, configure, deploy, query, and visualize data with Amazon Redshift
Audience
- Developers
- IT Professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
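As a hedged sketch of the load-and-query cycle covered in this course: Amazon Redshift is PostgreSQL-compatible, so a client such as psycopg2 can connect to it, and the COPY command bulk-loads data from S3. The cluster endpoint, credentials, bucket, and IAM role below are all placeholders.

import psycopg2

# Connect to a (placeholder) Redshift cluster endpoint.
conn = psycopg2.connect(host="my-cluster.abc123.eu-west-1.redshift.amazonaws.com",
                        port=5439, dbname="dev", user="awsuser", password="...")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS sales (id INT, amount DECIMAL(10,2), sold_at DATE)")
# COPY is the usual way to bulk-load Redshift tables, here from CSV files in S3.
cur.execute("""
    COPY sales FROM 's3://my-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS CSV IGNOREHEADER 1
""")
cur.execute("SELECT COUNT(*), SUM(amount) FROM sales")
print(cur.fetchone())
conn.commit()
cur.close()
conn.close()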
In this instructor-led, live training, participants will learn how to work with Hadoop, MapReduce, Pig, and Spark using Python as they step through multiple examples and use cases.
By the end of this training, participants will be able to:
- Understand the basic concepts behind Hadoop, MapReduce, Pig, and Spark
- Use Python with Hadoop Distributed File System (HDFS), MapReduce, Pig, and Spark
- Use Snakebite to programmatically access HDFS within Python
- Use mrjob to write MapReduce jobs in Python
- Write Spark programs with Python
- Extend the functionality of Pig using Python UDFs
- Manage MapReduce jobs and Pig scripts using Luigi
Audience
- Developers
- IT Professionals
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
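As one concrete illustration of the tools listed above, the classic mrjob word-count job below shows what a MapReduce job written in Python looks like; the script name word_count.py and the input file are placeholders.

from mrjob.job import MRJob

class MRWordCount(MRJob):

    def mapper(self, _, line):
        # Emit (word, 1) for every word in the input line.
        for word in line.split():
            yield word.lower(), 1

    def reducer(self, word, counts):
        # Sum the per-word counts produced by the mappers.
        yield word, sum(counts)

if __name__ == "__main__":
    # Run locally with: python word_count.py input.txt
    MRWordCount.run()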
By the end of this training, participants will be able to:
- Learn how to use Spark with Python to analyze Big Data.
- Work on exercises that mimic real world cases.
- Use different tools and techniques for big data analysis using PySpark.
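A small sketch of the kind of PySpark analysis practiced in the exercises, assuming a CSV file with "country" and "amount" columns (both placeholders):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()
# Load a (placeholder) CSV file with a header row and inferred column types.
orders = spark.read.csv("/data/orders.csv", header=True, inferSchema=True)
# Aggregate order counts and revenue per country, largest revenue first.
(orders.groupBy("country")
       .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
       .orderBy(F.desc("revenue"))
       .show(10))
spark.stop()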
In this instructor-led, live training, participants will learn the mindset with which to approach Big Data technologies, assess their impact on existing processes and policies, and implement these technologies for the purpose of identifying criminal activity and preventing crime. Case studies from law enforcement organizations around the world will be examined to gain insights on their adoption approaches, challenges and results.
By the end of this training, participants will be able to:
- Combine Big Data technology with traditional data gathering processes to piece together a story during an investigation
- Implement industrial big data storage and processing solutions for data analysis
- Prepare a proposal for the adoption of the most adequate tools and processes for enabling a data-driven approach to criminal investigation
Audience
- Law Enforcement specialists with a technical background
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
The course consists of 8 modules (4 on day 1, and 4 on day 2)
By the end of this training, participants will be able to:
- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Custom develop their own Apache Nifi processor.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
By the end of this training, participants will be able to:
- Install and configure Apache NiFi.
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
- Automate dataflows.
- Enable streaming analytics.
- Apply various approaches for data ingestion.
- Transform Big Data into business insights.
In this instructor-led, live training, participants will learn how to set up a SolrCloud instance on Amazon AWS.
By the end of this training, participants will be able to:
- Understand SolrCloud's features and how they compare to those of conventional master-slave clusters
- Configure a SolrCloud centralized cluster
- Automate processes such as communicating with shards, adding documents to the shards, etc.
- Use Zookeeper in conjunction with SolrCloud to further automate processes
- Use the interface to manage error reporting
- Load balance a SolrCloud installation
- Configure SolrCloud for continuous processing and fail-over
Audience
- Solr Developers
- Project Managers
- System Administrators
- Search Analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
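As a rough, non-authoritative sketch of the automation practiced in this course, the Python snippet below drives SolrCloud's HTTP APIs: it creates a sharded collection through the Collections API, indexes a document, and queries it back. The node address, collection name, and field names are placeholders.

import requests

base = "http://localhost:8983/solr"

# Create a collection with 2 shards and 2 replicas via the Collections API.
requests.get(f"{base}/admin/collections",
             params={"action": "CREATE", "name": "articles",
                     "numShards": 2, "replicationFactor": 2}).raise_for_status()

# Index a document (committing immediately), then query it back.
requests.post(f"{base}/articles/update", params={"commit": "true"},
              json=[{"id": "1", "title_t": "hello solrcloud"}]).raise_for_status()
print(requests.get(f"{base}/articles/select", params={"q": "title_t:hello"}).json())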
By the end of this training, participants will be able to:
- Understand the architecture and design concepts behind Data Vault 2.0, and its interaction with Big Data, NoSQL and AI.
- Use data vaulting techniques to enable auditing, tracing, and inspection of historical data in a data warehouse.
- Develop a consistent and repeatable ETL (Extract, Transform, Load) process.
- Build and deploy highly scalable and repeatable warehouses.
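As a minimal sketch of one Data Vault 2.0 practice touched on above, hub and link keys are typically derived by hashing the business keys; the delimiter, normalization rules, and hash algorithm below are illustrative choices, not a prescribed standard.

import hashlib

def hash_key(*business_keys, delimiter="||"):
    # Normalize, concatenate, and hash the business key(s) into a deterministic surrogate key.
    normalized = delimiter.join(str(k).strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

print(hash_key("CUST-001"))            # hub hash key for a customer
print(hash_key("CUST-001", "ORD-42"))  # link hash key joining a customer and an order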
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user-interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results
Audience
- Data analysts
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
In this instructor-led, live training (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
- Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
Audience
- Developers
- Software architects
Format of the Course
- Part lecture, part discussion, exercises and heavy hands-on practice
Notes
- To request a customized training for this course, please contact us to arrange.
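As a hedged sketch of one such integration, the PySpark Structured Streaming job below reads records from a Kafka topic and maintains a running count per value; the broker address and topic are placeholders, and the spark-sql-kafka connector is assumed to be on the classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-demo").getOrCreate()
# Subscribe to a (placeholder) Kafka topic as a streaming source.
events = (spark.readStream.format("kafka")
               .option("kafka.bootstrap.servers", "localhost:9092")
               .option("subscribe", "events")
               .load())
# Count occurrences of each message value, continuously updated as data arrives.
counts = events.selectExpr("CAST(value AS STRING) AS value").groupBy("value").count()
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()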
In this instructor-led, live training, participants will learn how to maximize the features of Pentaho Open Source BI Suite Community Edition (CE).
By the end of this training, participants will be able to:
- Install and configure Pentaho Open Source BI Suite Community Edition (CE)
- Understand the fundamentals of Pentaho CE tools and their features
- Build reports using Pentaho CE
- Integrate third party data into Pentaho CE
- Work with big data and analytics in Pentaho CE
Audience
- Programmers
- BI Developers
Format of the course
- Part lecture, part discussion, exercises and heavy hands-on practice
Note
- To request a customized training for this course, please contact us to arrange.
By the end of this training, participants will be able to:
- Use Ignite for in-memory and on-disk persistence, as well as a purely distributed in-memory database.
- Achieve persistence without syncing data back to a relational database.
- Use Ignite to carry out SQL and distributed joins.
- Improve performance by moving data closer to the CPU, using RAM as storage.
- Spread data sets across a cluster to achieve horizontal scalability.
- Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.
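As a minimal sketch of working with Ignite from code, the snippet below uses pyignite, the Ignite thin client for Python, against a local node on the default thin-client port 10800; the cache name and values are placeholders.

from pyignite import Client

client = Client()
client.connect("127.0.0.1", 10800)
# Create (or reuse) a distributed cache and read/write a key-value pair.
cache = client.get_or_create_cache("quotes")
cache.put("AAPL", 189.5)   # the entry lives in cluster memory (and on disk, if persistence is enabled)
print(cache.get("AAPL"))
client.close()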
By the end of this training, participants will be able to:
- Create Spark applications with the Scala programming language.
- Use Spark Streaming to process continuous streams of real-time data.
By the end of this training, participants will be able to use Apache Kafka to monitor and manage conditions in continuous data streams using Python programming.
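As a rough sketch of that workflow, assuming the kafka-python package and a local broker (the topic name and threshold below are placeholders):

from kafka import KafkaProducer, KafkaConsumer

# Publish a sample reading to a (placeholder) topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("sensor-readings", b"71.3")
producer.flush()

# Consume the stream and react when a condition is met.
consumer = KafkaConsumer("sensor-readings", bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest", consumer_timeout_ms=5000)
for message in consumer:
    reading = float(message.value)
    if reading > 70.0:
        print("threshold exceeded:", reading)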
By the end of this training, participants will be able to:
- Install and configure Talend Open Studio for Big Data.
- Connect with Big Data systems such as Cloudera, HortonWorks, MapR, Amazon EMR and Apache.
- Understand and set up Open Studio's big data components and connectors.
- Configure parameters to automatically generate MapReduce code.
- Use Open Studio's drag-and-drop interface to run Hadoop jobs.
- Prototype big data pipelines.
- Automate big data integration projects.