
Luigi

Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization, and more. It also comes with Hadoop support built in.


Cost / License

  • Free
  • Open Source

Platforms

  • Linux


Tags

  • etl


Luigi information

  • Developed by: Spotify (Sweden)
  • Licensing: Open Source and Free product.
  • Alternatives: 14 alternatives listed
  • Supported Languages: English
Luigi was added to AlternativeTo by thomasleveil.

What is Luigi?

The purpose of Luigi is to address all the plumbing typically associated with long-running batch processes: you want to chain many tasks together and automate them, and failures will happen. These tasks can be anything, but are typically long-running things like Hadoop jobs, dumping data to or from databases, running machine learning algorithms, or anything else.
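As a minimal sketch of what that plumbing looks like (task names and file paths here are purely illustrative, not from any real project): each Luigi task declares its upstream dependencies in requires(), the artifact it produces in output(), and its actual work in run(). Luigi only runs a task whose output does not yet exist, which is what makes re-running a failed pipeline safe.

import luigi


class DumpData(luigi.Task):
    """First step: produce a raw data file."""

    def output(self):
        return luigi.LocalTarget("data/raw.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("hello\nworld\n")


class CountLines(luigi.Task):
    """Second step: depends on DumpData and counts its lines."""

    def requires(self):
        # Luigi runs DumpData first if its output is missing.
        return DumpData()

    def output(self):
        return luigi.LocalTarget("data/count.txt")

    def run(self):
        with self.input().open() as f:
            n = sum(1 for _ in f)
        with self.output().open("w") as f:
            f.write(str(n))


if __name__ == "__main__":
    # Run the whole dependency graph with the local scheduler.
    luigi.build([CountLines()], local_scheduler=True)

Running this script executes DumpData first and then CountLines; running it a second time does nothing, because both outputs already exist.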

There are other software packages that focus on lower-level aspects of data processing, like Hive, Pig, or Cascading. Luigi is not a framework to replace these. Instead, it helps you stitch many tasks together, where each task can be a Hive query, a Hadoop job in Java, a Spark job in Scala or Python, a Python snippet, a table dump from a database, or anything else. It's easy to build up long-running pipelines that comprise thousands of tasks and take days or weeks to complete. Luigi takes care of much of the workflow management so that you can focus on the tasks themselves and their dependencies.
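Pipelines of that size usually come from parameterized tasks: a DateParameter, for instance, turns one task class into one task instance per day. The hypothetical sketch below builds a weekly report out of seven per-day tasks; each per-day task could just as well wrap a Hive query or a Spark job, since Luigi only sees tasks and the targets they produce.

import datetime
import luigi


class LogsForDay(luigi.Task):
    """One instance of this task exists per calendar day."""

    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(self.date.strftime("logs/%Y-%m-%d.txt"))

    def run(self):
        with self.output().open("w") as f:
            f.write("...")  # stand-in for a real extraction step


class WeeklyReport(luigi.Task):
    week_start = luigi.DateParameter()

    def requires(self):
        # One upstream task per day of the week.
        return [LogsForDay(self.week_start + datetime.timedelta(days=i))
                for i in range(7)]

    def output(self):
        return luigi.LocalTarget(
            self.week_start.strftime("reports/week-%Y-%m-%d.txt"))

    def run(self):
        # Concatenate the seven daily files into the weekly report.
        with self.output().open("w") as out:
            for target in self.input():
                with target.open() as f:
                    out.write(f.read())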

You can build pretty much any task you want, but Luigi also comes with a toolbox of common task templates you can use. It includes support for running Python mapreduce jobs in Hadoop, as well as Hive and Pig jobs. It also comes with file system abstractions for HDFS and local files that ensure all file system operations are atomic. This is important because it means your data pipeline will not crash in a state containing partial data.
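The atomicity comes from how targets are written: opening a target in write mode goes through a temporary file that is only moved to the final path when the writer closes cleanly. A rough sketch of the behavior with a local target (the path is illustrative):

import luigi

target = luigi.LocalTarget("out/result.txt")
with target.open("w") as f:
    f.write("partial results...\n")
    # If the process dies here, out/result.txt is never created,
    # so a re-run sees the task as incomplete and redoes it.
    f.write("...complete\n")
# Only now does the file appear at its final path.
print(target.exists())  # True

Because the final path never exists until the write has finished, a crashed task leaves nothing behind, and the scheduler simply treats it as incomplete on the next run.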