The Apache Hop team released Apache Hop 2.3.0 yesterday.
Enterprise data orchestration
by the original creators of Apache Hop®
Apache Hop empowers data orchestration at any scale, anywhere
Everything you need to run Apache Hop in production
Run Apache Hop to its full potential with support and know-how from experts who have been involved since day one.
Custom Hop
Do you need a new or customized plugin for Apache Hop that isn't on the community's radar?
We build and customize plugins for you; even entirely customized Hop builds are an option.
Training & Coaching
Hop is a broad and deep platform that requires knowledge and experience to deliver ideal results.
Our classroom and e-learning platforms bring you up to speed. In a series of coaching and audit sessions, we teach data teams how to work according to best practices for optimal results.
Professional Support
Your mission-critical Apache Hop project can't be at risk. All systems fail sooner or later; our support desk is ready to help when that happens.
We take support seriously: we want to support you throughout your entire project's life cycle, with training, coaching, and audit sessions included.
Apache Hop Focus Areas
Apache Hop is a perfect choice for all your data integration and data orchestration needs, and it especially excels in a number of architectures and use cases.
Apache Hop and Neo4j
Graphs are taking the world by storm, with market leader Neo4j at the forefront. Graph data models and queries are extremely powerful and open up entirely new ways of analyzing your data. However, reliable, repeatable, and scalable graph data loading is often not a trivial task.
Apache Hop's unparalleled graph data loading and querying functionality puts Hop in pole position for all your graph data needs.
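To give an idea of what such a load involves when coded by hand, the sketch below uses the Neo4j Java driver to merge a single node; the connection details, node label, and properties are illustrative assumptions, not part of any real project. In Hop, the equivalent load is configured visually with the Neo4j transforms rather than written as code.

```java
import org.neo4j.driver.AuthTokens;
import org.neo4j.driver.Driver;
import org.neo4j.driver.GraphDatabase;
import org.neo4j.driver.Session;

import java.util.Map;

public class LoadCustomerNode {
  public static void main(String[] args) {
    // Placeholder connection details for a local Neo4j instance.
    try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
            AuthTokens.basic("neo4j", "password"));
         Session session = driver.session()) {
      // MERGE keeps the load repeatable: re-running it does not create duplicate nodes.
      session.run(
          "MERGE (c:Customer {id: $id}) SET c.name = $name",
          Map.<String, Object>of("id", "C-001", "name", "Acme"));
    }
  }
}
```

Writing and maintaining this kind of code for every node and relationship type is exactly the repetitive work that Hop's visual graph loading is designed to take off your hands.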
Run Hop pipelines on Apache Beam
Large-scale distributed data processing platforms like Apache Spark, Apache Flink, and Google Cloud Dataflow make processing huge amounts of data possible. Apache Beam offers a unified programming model to define ETL, batch, and stream pipelines on any of these platforms.
Where Apache Beam makes a unified code base across all major distributed platforms possible, Apache Hop makes it easy. Hop's visual pipeline development, unit testing, and project life cycle support allow data project teams to build pipelines once and run them wherever it makes the most sense.
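For context, this is roughly what a hand-written Beam pipeline looks like in Java; the file names and transforms below are illustrative assumptions. The runner (Direct, Spark, Flink, or Dataflow) is selected through pipeline options rather than code changes, which is what makes the model portable. With Hop, you design the pipeline visually and the chosen Beam run configuration translates it for you, so none of this code has to be written by hand.

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalBeamPipeline {
  public static void main(String[] args) {
    // The runner (DirectRunner, SparkRunner, FlinkRunner, DataflowRunner)
    // is chosen via pipeline options, not by changing the pipeline code.
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline pipeline = Pipeline.create(options);

    pipeline
        .apply("ReadLines", TextIO.read().from("input.txt"))
        .apply("CountLines", Count.globally())
        .apply("FormatResult", MapElements
            .into(TypeDescriptors.strings())
            .via((Long count) -> "lines: " + count))
        .apply("WriteResult", TextIO.write().to("output"));

    pipeline.run().waitUntilFinish();
  }
}
```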
Upgrade from Pentaho (Kettle)
Apache Hop started its journey in 2019 as a fork of Kettle, the open source project behind Pentaho Data Integration.
Although Apache Hop and Kettle have become different, independent, and incompatible projects, this shared history allows Hop to import Kettle projects.
This upgrade not only converts Kettle jobs and transformations to Hop workflows and pipelines, but also gives you access to:
- tons of new functionality, including integration with Neo4j and dozens of other platforms
- projects and environments
- run configurations for Apache Spark, Apache Flink, and Google Cloud Dataflow through Apache Beam
- project life cycle management through integrated version control, unit testing and CI/CD integration
Latest posts
Check our blog for the latest Apache Hop news, release previews, and behind-the-scenes information.
Why use Dataflow?
Google Cloud Dataflow is a unified stream and batch data processing engine that...
Apache Hop 2.2.0 is available!
The Apache Hop community just released Apache Hop 2.2.0, the fifth (!!) and final release of 2022...
Contact us
Leave your contact details here and we'll be in touch.