ModelDock.run
Hosted dbt-core scheduler. Connect your GitHub repo, pick your dbt-core version and warehouse adapter, set a cron schedule — done. Supports PostgreSQL, Snowflake, BigQuery, Redshift, Databricks, and Fabric. Free tier available.
Cost / License
- Free
- Proprietary
Platforms
- Online
Features
- Data analytics
ModelDock.run News & Activities
Recent activities
andrecrash3r-pt added ModelDock.run as alternative to dbt (Data Build Tool) and nao
ModelDock.run information
What is ModelDock.run?
I built ModelDock.run because I wanted a simple way to run dbt-core projects on a schedule without managing infrastructure.
ModelDock.run is a hosted dbt-core scheduler. Connect your GitHub repo, pick your dbt-core version and warehouse adapter, set a cron schedule, and you're done. Your dbt-core project runs on schedule in an isolated Docker container. You get run history, lineage, logs, a real-time code browser, artifacts, advanced model execution, and monitoring and alerts, all through a clean web UI.
How it works:
- You create a project and point it at your GitHub repo
- The system dynamically generates an Airflow DAG from your config
- On schedule, it clones your repo, generates profiles.yml from your (AES-256-GCM encrypted) credentials, runs dbt-core in an isolated Docker container, and stores artifacts
- Each run is fully isolated — no shared state between executions
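The per-run flow above can be sketched roughly as follows. This is an illustrative assumption, not ModelDock.run's actual code: the image tag scheme, mount paths, and `dbt build` invocation are hypothetical stand-ins for how an isolated per-run container might be composed.

```python
import shlex

def build_run_command(project: dict, run_id: str) -> list[str]:
    """Compose an isolated `docker run` invocation for one scheduled dbt-core run.

    `project` is a hypothetical config dict; the image tag, volume layout,
    and network name are assumptions for illustration only.
    """
    image = f"dbt-runner:{project['adapter']}-{project['dbt_version']}"
    workdir = f"/runs/{run_id}"  # fresh clone + generated profiles.yml per run
    return [
        "docker", "run",
        "--rm",                    # container is discarded after the run
        "--network", "warehouse",  # reach the warehouse, share nothing else
        "-v", f"{workdir}:/project:ro",
        "-v", f"{workdir}/profiles.yml:/root/.dbt/profiles.yml:ro",
        image,
        "dbt", "build", "--project-dir", "/project",
    ]

cmd = build_run_command({"adapter": "postgres", "dbt_version": "1.9"}, "run-42")
print(shlex.join(cmd))
```

Because the working directory is created fresh per run and the container is removed afterwards (`--rm`), nothing persists between executions except the stored artifacts.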
Stack: Next.js, Airflow, PostgreSQL, Docker, Caddy (HTTPS), fail2ban (brute-force protection), Sentry (error tracking), and Umami (analytics). The whole thing runs under Docker Compose in a single environment.
Supported adapters (more to come): PostgreSQL, Snowflake, BigQuery, Redshift, Databricks, and Fabric Lakehouse & Warehouse — across dbt-core versions 1.8 through 1.11 (where applicable). Docker images are built on demand per adapter/version combo and cached for reuse.
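The per-combo build-and-cache behavior described above can be sketched like this. The tag scheme, `ImageCache` class, and stubbed build function are assumptions for illustration, not the actual implementation:

```python
def image_tag(adapter: str, dbt_version: str) -> str:
    """Deterministic tag per adapter/version combo, e.g. 'dbt-runner:snowflake-1.10'."""
    return f"dbt-runner:{adapter}-{dbt_version}"

class ImageCache:
    """Build a Docker image on first request for a combo, reuse it afterwards."""

    def __init__(self, build_fn):
        self._build = build_fn          # in production this would wrap `docker build`
        self._built: set[str] = set()   # tags already built
        self.builds = 0                 # count of real builds performed

    def ensure(self, adapter: str, dbt_version: str) -> str:
        tag = image_tag(adapter, dbt_version)
        if tag not in self._built:      # cache miss: build exactly once
            self._build(tag)
            self._built.add(tag)
            self.builds += 1
        return tag

# Usage: two runs on the same combo trigger exactly one build.
cache = ImageCache(build_fn=lambda tag: None)  # stub; real code shells out to Docker
cache.ensure("postgres", "1.9")
cache.ensure("postgres", "1.9")
print(cache.builds)  # → 1
```

Keying the cache on (adapter, version) keeps image count bounded by the number of supported combos rather than the number of projects or runs.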
Why I built this:
I'm a Data Engineer & Architect consultant with 20+ years across data and Linux infrastructure. I use dbt-core daily and love it, but I got tired of choosing between managing my own Airflow/Docker setup or paying enterprise prices just to run a CLI tool on a schedule. So I built the middle ground — the simplest possible path from "I have a dbt-core repo" to "it runs in production."
The target audience is data teams and analytics engineers who know dbt-core well and just need somewhere reliable to run it.
Currently in free open beta — looking for feedback from real dbt-core users. What's missing? Which other connectors are important to you? What would make this useful to you? What's holding you back?
