Apache Spark Courses

Apache Spark training for big data processing. Instructor-led, live Apache Spark training courses demonstrate, through hands-on practice, how Spark fits into the Big Data ecosystem and how to use Spark for data analysis. Apache Spark training is available in several formats, including onsite live training and live, interactive online training. Onsite live training can be carried out at client premises in Brazil or at local NobleProg training centers in Brazil. Remote live training is carried out by way of an interactive remote desktop.



NobleProg -- Your Local Training Provider

Client Testimonials


Our Clients

Spark Subcategories

Spark Course Outlines

Course Name
Duration
Overview
21 hours
Overview
This is an introductory Apache Spark course. Participants will learn how Spark fits into the Big Data ecosystem and how to use it to analyze data. The course covers Spark for data analysis, Spark internals, the Spark APIs, Spark SQL, Spark Streaming, machine learning, and GraphX.
21 hours
Overview
This instructor-led, live training in Brasil (online or onsite) introduces Hortonworks Data Platform (HDP) and walks participants through the deployment of a Spark + Hadoop solution.

By the end of this training, participants will be able to:

- Use Hortonworks to reliably run Hadoop at a large scale.
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project.
- Process different types of data, including structured, unstructured, in-motion, and at-rest.
14 hours
Overview
Magellan is an open-source distributed execution engine for geospatial analytics on big data. Implemented on top of Apache Spark, it extends Spark SQL and provides a relational abstraction for geospatial analytics.

This instructor-led, live training introduces the concepts and approaches for implementing geospatial analytics and walks participants through the creation of a predictive analysis application using Magellan on Spark.

By the end of this training, participants will be able to:

- Efficiently query, parse and join geospatial datasets at scale
- Implement geospatial data in business intelligence and predictive analytics applications
- Use spatial context to extend the capabilities of mobile devices, sensors, logs, and wearables

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
7 hours
Overview
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered

Audience

- Data scientist
- Developer
- System administrator

Format of the course

- Part lecture, part discussion, exercises and heavy hands-on practice
7 hours
Overview
Spark SQL is Apache Spark's module for working with structured and semi-structured data. Spark SQL provides information about the structure of the data as well as the computation being performed. This information can be used to perform optimizations. Two common uses for Spark SQL (illustrated in the sketch below) are:
- to execute SQL queries.
- to read data from an existing Hive installation.
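
A minimal PySpark sketch of both uses follows; the file name "sales.json" and its columns are assumptions for illustration only and are not part of the course material.

    from pyspark.sql import SparkSession

    # Start a session; enableHiveSupport() lets Spark SQL read an existing Hive metastore.
    spark = SparkSession.builder.appName("spark-sql-demo").enableHiveSupport().getOrCreate()

    # Read semi-structured JSON and expose it to SQL as a temporary view.
    sales = spark.read.json("sales.json")
    sales.createOrReplaceTempView("sales")

    # Run a plain SQL query against the view.
    spark.sql("SELECT region, SUM(amount) AS total FROM sales GROUP BY region").show()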

In this instructor-led, live training (onsite or remote), participants will learn how to analyze various types of data sets using Spark SQL.

By the end of this training, participants will be able to:

- Install and configure Spark SQL.
- Perform data analysis using Spark SQL.
- Query data sets in different formats.
- Visualize data and query results.

Format of the Course

- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.

Course Customization Options

- To request a customized training for this course, please contact us to arrange.
21 hours
Overview
In this instructor-led, live training in Brasil (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.

By the end of this training, participants will be able to:

- Install and configure different stream processing frameworks, such as Spark Streaming and Kafka Streams (a minimal Spark Structured Streaming sketch follows this list).
- Understand and select the most appropriate framework for the job.
- Process data continuously, concurrently, and in a record-by-record fashion.
- Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
- Integrate the most appropriate stream processing library with enterprise applications and microservices.
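
Below is a minimal Spark Structured Streaming sketch that reads from Kafka; the broker address "localhost:9092" and topic name "events" are placeholders, and the spark-sql-kafka connector package must be available to Spark.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    # Read a continuous stream of records from a Kafka topic (placeholders below).
    events = (spark.readStream
                   .format("kafka")
                   .option("kafka.bootstrap.servers", "localhost:9092")
                   .option("subscribe", "events")
                   .load())

    # Continuously count records per key and print the running totals to the console.
    counts = events.groupBy("key").count()
    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()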
21 hours
Overview
Big data analytics involves the process of examining large amounts of varied data sets in order to uncover correlations, hidden patterns, and other useful insights.

The health industry has massive amounts of complex, heterogeneous medical and clinical data. Applying big data analytics to health data offers huge potential for deriving insights that improve the delivery of healthcare. However, the sheer size of these datasets poses great challenges for analysis and for practical application in a clinical environment.

In this instructor-led, live training (remote), participants will learn how to perform big data analytics in health as they step through a series of hands-on live-lab exercises.

By the end of this training, participants will be able to:

- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to deal with medical data
- Study big data systems and algorithms in the context of health applications

Audience

- Developers
- Data Scientists

Format of the Course

- Part lecture, part discussion, exercises and heavy hands-on practice.

Note

- To request a customized training for this course, please contact us to arrange.
21 hours
Overview
Apache Spark's learning curve rises slowly at the beginning; it takes a lot of effort to get the first results. This course aims to jump through that first tough part. After taking this course, participants will understand the basics of Apache Spark, clearly differentiate an RDD from a DataFrame, know the Python and Scala APIs, and understand executors and tasks. Following best practices, the course also focuses strongly on cloud deployment, Databricks and AWS. Participants will also understand the differences between AWS EMR and AWS Glue, one of the latest Spark services from AWS. A preview of the RDD versus DataFrame distinction is sketched below.
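
A minimal PySpark sketch contrasting the two APIs, using made-up sample data:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()
    data = [("spark", 3), ("hadoop", 2), ("spark", 5)]

    # RDD API: low-level, operates on arbitrary Python objects.
    rdd = spark.sparkContext.parallelize(data)
    print(rdd.reduceByKey(lambda a, b: a + b).collect())

    # DataFrame API: schema-aware and optimized by the Catalyst query planner.
    df = spark.createDataFrame(data, ["name", "count"])
    df.groupBy("name").sum("count").show()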

Audience

Data Engineer, DevOps, Data Scientist
21 hours
Overview
This instructor-led, live training in Brasil (online or onsite) is aimed at software engineers who wish to stream big data with Spark Streaming and Scala.

By the end of this training, participants will be able to:

- Create Spark applications with the Scala programming language.
- Use Spark Streaming to process continuous streams of real-time data.
14 hours
Overview
This instructor-led, live training in Brasil (online or onsite) is aimed at data scientists who wish to use the SMACK stack to build data processing platforms for big data solutions.

By the end of this training, participants will be able to:

- Implement a data pipeline architecture for processing big data.
- Develop a cluster infrastructure with Apache Mesos and Docker.
- Analyze data with Spark and Scala.
- Manage unstructured data with Apache Cassandra.
21 hours
Overview
This instructor-led, live training in Brasil (online or onsite) is aimed at engineers who wish to set up and deploy an Apache Spark system for processing very large amounts of data.

By the end of this training, participants will be able to:

- Install and configure Apache Spark.
- Quickly process and analyze very large data sets.
- Understand the difference between Apache Spark and Hadoop MapReduce and when to use which.
- Integrate Apache Spark with other machine learning tools.
21 hours
Overview
This instructor-led, live training in Brasil (online or onsite) is aimed at developers who wish to carry out big data analysis using Apache Spark in their .NET applications.

By the end of this training, participants will be able to:

- Install and configure Apache Spark.
- Understand how .NET for Apache Spark implements the Spark APIs so that they can be accessed from a .NET application.
- Develop data processing applications using C# or F#, capable of handling data sets whose size is measured in terabytes and petabytes.
- Develop machine learning features for a .NET application using Apache Spark capabilities.
- Carry out exploratory analysis using SQL queries on big data sets.
35 hours
Overview
MLlib is Spark’s machine learning (ML) library. Its goal is to make practical machine learning scalable and easy. It consists of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, dimensionality reduction, as well as lower-level optimization primitives and higher-level pipeline APIs.

It is divided into two packages (a minimal spark.ml example is sketched below):

- spark.mllib contains the original API built on top of RDDs.
- spark.ml provides a higher-level API built on top of DataFrames for constructing ML pipelines.
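
A minimal spark.ml pipeline sketch; the column names and the tiny in-memory dataset are assumptions for illustration only.

    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import VectorAssembler
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("mllib-pipeline").getOrCreate()

    # Tiny illustrative training set: two features and a binary label.
    train = spark.createDataFrame(
        [(0.0, 1.0, 0.0), (1.0, 0.0, 1.0), (0.5, 0.5, 1.0)],
        ["f1", "f2", "label"])

    # Assemble the feature columns into a single vector, then fit a classifier.
    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    lr = LogisticRegression(maxIter=10)
    model = Pipeline(stages=[assembler, lr]).fit(train)

    model.transform(train).select("label", "prediction").show()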

Audience

This course is directed at engineers and developers seeking to utilize the built-in machine learning library for Apache Spark.
21 hours
Overview
This course is aimed at developers and data scientists who wish to understand and implement AI within their applications. Special focus is given to Data Analysis, Distributed AI and NLP.
28 hours
Overview
In this instructor-led, live training in Brasil, participants will learn about the technology offerings and implementation approaches for processing graph data. The aim is to identify real-world objects, their characteristics and relationships, then model these relationships and process them as data using a Graph Computing (also known as Graph Analytics) approach. We start with a broad overview and narrow in on specific tools as we step through a series of case studies, hands-on exercises and live deployments.

By the end of this training, participants will be able to:

- Understand how graph data is persisted and traversed.
- Select the best framework for a given task (from graph databases to batch processing frameworks).
- Implement Hadoop, Spark, GraphX and Pregel to carry out graph computing across many machines in parallel.
- View real-world big data problems in terms of graphs, processes and traversals.
21 hours
Overview
Python is a high-level programming language famous for its clear syntax and code readability. Spark is a data processing engine used for querying, analyzing, and transforming big data. PySpark allows users to interface with Spark through Python.

In this training, participants will learn how to use Python and Spark together to analyze big data as they work through hands-on exercises; a minimal PySpark sketch follows the objectives below.

By the end of this training, participants will be able to:

- Learn how to use Spark with Python to analyze Big Data
- Work on exercises that mimic real-world circumstances
- Use different tools and techniques for big data analysis using PySpark
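
A minimal PySpark sketch; the file "logs.csv" and its columns are assumptions for illustration only.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pyspark-intro").getOrCreate()

    # Load a CSV file into a DataFrame, inferring column types from the data.
    logs = spark.read.csv("logs.csv", header=True, inferSchema=True)

    # Typical exploratory query: the ten endpoints with the most server errors.
    (logs.filter(F.col("status") == 500)
         .groupBy("endpoint")
         .count()
         .orderBy(F.desc("count"))
         .show(10))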

Audience

- Developers
- IT Professionals
- Data Scientists

Format of the Course

- Lecture, discussion, exercises and practice

Upcoming Spark Courses

Weekend Apache Spark courses, Evening Apache Spark training, Spark group training, Instructor-led Apache Spark, Weekend Apache Spark training, Evening Spark courses, Apache Spark coaching, Spark instructor, Apache Spark trainer, Spark training courses, Spark classes, Apache Spark at the client's site, Private Apache Spark courses, One-on-one Spark training

Course Discounts

Course Discounts Newsletter

We respect the privacy of your data. We will not pass on or sell your email address to other companies.
You can always edit your preferences or unsubscribe.

NobleProg is growing fast!

We are looking to expand our presence in Brazil!

As a Business Development Manager you will:

  • expand business in Brazil
  • recruit local talent (sales, agents, trainers, consultants)

We offer:

  • Artificial Intelligence and Big Data systems to support your local operation
  • high-tech automation
  • continuously upgraded course catalogue and content
  • good fun in an international team

If you are interested in running a high-tech, high-quality training and consulting business:

Apply now!
