
get-set, Fetch!


Node.js web scraper. Use it from your own code, via the command line, or as a Docker container. Supports multiple storage options: SQLite, MySQL, PostgreSQL. Supports multiple browser or DOM-like clients: Puppeteer, Playwright, Cheerio, jsdom.

get-set, Fetch! screenshot 1

License model

  • Free • Open Source

Platforms

  • Mac
  • Windows
  • Linux

Features

  1. Command line interface
  2. Support for Docker


get-set, Fetch! information

  • Developed by

    Andrei Sabau
  • Licensing

Free and Open Source (MIT).
  • Alternatives

    25 alternatives listed
  • Supported Languages

    • English

AlternativeTo Category

OS & Utilities

GitHub repository

  •  114 Stars
  •  18 Forks
  •  12 Open Issues
  •   Updated Mar 13, 2023 
View on GitHub

Our users have written 0 comments and reviews about get-set, Fetch!, and it has received 0 likes.

get-set, Fetch! was added to AlternativeTo by asabau on Dec 6, 2021 and this page was last updated Dec 6, 2021.

What is get-set, Fetch!?

get-set, Fetch! is a plugin-based Node.js web scraper. It scrapes, stores, and exports data. At its core, an ordered list of plugins (default or custom-defined) is executed against each web resource to be scraped.
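The plugin pipeline described above can be pictured roughly like this. This is a minimal plain-Node.js sketch with hypothetical plugin names, not the library's actual API:

```javascript
// Hypothetical sketch of a plugin-based scrape pipeline: an ordered list
// of plugins is executed, in turn, against each resource to be scraped.
// Each plugin reads and augments a shared resource object.
const plugins = [
  { name: 'FetchPlugin',   run: r => ({ ...r, html: `<html>stub for ${r.url}</html>` }) },
  { name: 'ExtractPlugin', run: r => ({ ...r, data: { length: r.html.length } }) },
  { name: 'SavePlugin',    run: r => { console.log(`saved ${r.url}`); return r; } },
];

function scrapeResource(url) {
  // run every plugin, in order, threading the resource object through
  return plugins.reduce((resource, plugin) => plugin.run(resource), { url });
}

const result = scrapeResource('https://example.com');
console.log(result.data);
```

The appeal of this design is that custom behavior (extra extraction steps, different persistence) slots in by inserting or replacing a plugin, without touching the pipeline itself.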

Supports multiple storage options: SQLite, MySQL, PostgreSQL. Supports multiple browser or DOM-like clients: Puppeteer, Playwright, Cheerio, jsdom.
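One way to picture the interchangeable clients is a common interface with different implementations behind it. This is a hypothetical stdlib-only sketch; Puppeteer/Playwright (full browsers that execute JavaScript) and Cheerio/jsdom (DOM-only parsers) each have their own real APIs, which are stubbed out here:

```javascript
// Hypothetical sketch of swappable scrape clients behind one interface.
class BrowserClient {
  async getHtml(url) {
    // a real browser client would launch a page, navigate, and wait for JS
    return `<html><body>rendered ${url}</body></html>`;
  }
}

class DomClient {
  async getHtml(url) {
    // a DOM-like client fetches raw HTML and parses it; no JS execution
    return `<html><body>raw ${url}</body></html>`;
  }
}

// The client is chosen once; the rest of the scraper doesn't care which it is.
async function scrape(client, url) {
  const html = await client.getHtml(url);
  return html.length;
}

scrape(new DomClient(), 'https://example.com').then(len => console.log(len));
```

DOM-only clients are much cheaper per page, so a browser client is worth its cost mainly on JavaScript-heavy sites.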

For quick, small projects under 10K URLs, storing the queue and scraped content in SQLite is fine. For anything larger, use PostgreSQL; you can then start/stop/resume the scraping process across multiple scraper instances, each with its own IP and/or dedicated proxies. Using PostgreSQL, it takes 90 minutes to scrape 1 million URLs with a concurrency of 100 parallel scrape actions. That's 5.4 ms per scraped URL.
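As a sanity check on the arithmetic behind that figure:

```javascript
// Back-of-the-envelope check of the throughput claim above.
const totalMs = 90 * 60 * 1000;   // 90 minutes in milliseconds
const urls = 1_000_000;
const concurrency = 100;

const msPerUrl = totalMs / urls;             // amortized time per URL
const msPerAction = msPerUrl * concurrency;  // average wall time of one scrape action

console.log(msPerUrl);     // 5.4
console.log(msPerAction);  // 540
```

So while each individual scrape action averages roughly half a second, running 100 of them in parallel amortizes that down to 5.4 ms per URL.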

Official Links