
Deep web crawler document loader #5696

Open
carliblaz wants to merge 11 commits into FlowiseAI:main from carliblaz:DeepWebCrawler

Conversation

@carliblaz

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @carliblaz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request adds a powerful new 'Deep Web Crawler' document loader to the system. This new component is designed to efficiently traverse websites, extract relevant textual content, and prepare it for use in Langchain applications. It provides comprehensive control over the crawling process, from navigation strategy to content refinement, ensuring high-quality data acquisition from diverse web sources.

Highlights

  • New Document Loader: Introduces a new 'Deep Web Crawler' document loader node, designed for comprehensive website data ingestion.
  • Flexible Crawling Modes: Supports various crawling strategies including crawling links discovered from HTML, discovering pages from sitemaps, or using both methods.
  • Advanced Configuration: Offers extensive configuration options such as rendering JavaScript (using Chromium/Puppeteer), setting maximum pages and depth, filtering URLs with include/exclude regex, and stripping unwanted HTML elements using CSS selectors.
  • Intelligent Content Cleanup: Includes features to remove repeated boilerplate blocks across pages (e.g., footers, navigation) and filters out asset URLs to focus on textual content.
  • Politeness and Performance: Implements respect for robots.txt and allows fine-tuning of crawling parameters like concurrency, delay between requests, and request timeouts.
  • New Dependencies: Adds @mozilla/readability for text extraction, crawlee for robust crawling, and robots-parser for robots.txt compliance (see the usage sketch after this list).
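
To give reviewers a mental model of how these pieces fit together, here is a minimal, hypothetical sketch — not the PR's actual code — combining crawlee for crawling, @mozilla/readability for text extraction, and robots-parser for politeness. It additionally assumes jsdom (to give Readability a DOM) and Node 18+ (for global fetch); the start URL, include pattern, crawler name, and limits are made-up example values.

```ts
// Hypothetical sketch only — the PR's real loader will differ in detail.
import { CheerioCrawler } from 'crawlee';
import { Readability } from '@mozilla/readability';
import { JSDOM } from 'jsdom'; // assumed extra dependency for this sketch
import robotsParser from 'robots-parser';

const START_URL = 'https://example.com/'; // example value

async function crawl() {
  // Politeness: fetch robots.txt once and consult it before enqueueing URLs.
  const robotsUrl = new URL('/robots.txt', START_URL).href;
  const robots = robotsParser(robotsUrl, await (await fetch(robotsUrl)).text());

  const docs: { url: string; title: string; text: string }[] = [];

  const crawler = new CheerioCrawler({
    maxRequestsPerCrawl: 50,        // "maximum pages" knob
    maxConcurrency: 4,              // "concurrency" knob
    requestHandlerTimeoutSecs: 30,  // "request timeout" knob
    async requestHandler({ request, body, enqueueLinks }) {
      // Extract the main textual content with Readability.
      const dom = new JSDOM(body.toString(), { url: request.loadedUrl });
      const article = new Readability(dom.window.document).parse();
      if (article?.textContent) {
        docs.push({
          url: request.loadedUrl ?? request.url,
          title: article.title ?? '',
          text: article.textContent,
        });
      }

      // "Crawl links from HTML" mode: follow same-domain links up to a depth.
      const depth = (request.userData.depth as number) ?? 0;
      if (depth >= 2) return; // "maximum depth" knob
      await enqueueLinks({
        strategy: 'same-domain',
        regexps: [/\/docs\//], // include-URL regex (example value)
        transformRequestFunction: (req) => {
          // Asset filter and robots.txt check before a URL enters the queue.
          if (/\.(png|jpe?g|gif|svg|css|js|pdf|zip)(\?|$)/i.test(req.url)) return false;
          if (robots.isAllowed(req.url, 'FlowiseCrawler') === false) return false;
          req.userData = { depth: depth + 1 };
          return req;
        },
      });
    },
  });

  await crawler.run([{ url: START_URL, userData: { depth: 0 } }]);
  return docs;
}
```

A PuppeteerCrawler would presumably back the "render JavaScript" option, and the sitemap mode would seed the request queue from sitemap.xml instead of (or in addition to) enqueueLinks.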

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a powerful new Deep Web Crawler document loader. The implementation is robust, leveraging crawlee for crawling and @mozilla/readability for content extraction, with well-structured options for sitemap and link-based crawling. My review identifies a couple of high-severity issues related to inconsistent default parameter values that could lead to unexpected user experiences. Additionally, I've pointed out several instances of unused code (dead code) that should be removed to improve the maintainability and clarity of the new core.ts file. Overall, this is a great addition, and addressing these points will make it even better.
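
To make the default-value concern concrete, here is a hypothetical illustration — invented names and numbers, not the PR's actual fields — of keeping the form default and the code-side fallback aligned:

```ts
// Hypothetical illustration of the reviewer's point, not the PR's actual code.
type Inputs = { maxPages?: string };

// Default advertised in the node's form definition.
const UI_DEFAULT_MAX_PAGES = 10;

function resolveMaxPages(inputs: Inputs): number {
  // If this fallback differed from the UI default (say 50 vs 10), a user who
  // leaves the field untouched would get behavior the form never promised.
  const parsed = Number.parseInt(inputs.maxPages ?? '', 10);
  return Number.isNaN(parsed) ? UI_DEFAULT_MAX_PAGES : parsed;
}

console.log(resolveMaxPages({}));                 // 10 — matches the form
console.log(resolveMaxPages({ maxPages: '25' })); // 25 — explicit override
```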

@carliblaz carliblaz marked this pull request as ready for review February 2, 2026 14:37
@carliblaz carliblaz changed the title from "WIP: Deep web crawler document loader" to "Deep web crawler document loader" Feb 2, 2026
@HenryHengZJ
Contributor

thanks! quick question: how is it different from other crawlers we have in the doc loader?

@carliblaz
Author

  1. It respects robots.txt.
  2. It removes parts that are common across pages.
  3. It has a setting for maximum depth.
  4. It has an option for concurrency.

The biggest change is that it removes common parts. Why is this important? Because other crawlers collect links from the first page, then crawl the next page, where they find almost the same links as on the first page (mostly header and navigation links), and so on: nearly every page repeats the same link blocks.
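
As a rough illustration of that idea — a hypothetical sketch, not the PR's implementation — one can split each crawled page into text blocks, count how many pages each block appears on, and drop blocks that repeat on most pages:

```ts
// Hypothetical sketch of cross-page boilerplate removal — not the PR's code.
import { createHash } from 'node:crypto';

const hashBlock = (s: string) => createHash('sha1').update(s).digest('hex');

function stripRepeatedBlocks(pages: string[], maxShare = 0.6): string[] {
  // Split each page into paragraph-ish text blocks.
  const pageBlocks = pages.map((page) =>
    page.split(/\n{2,}/).map((b) => b.trim()).filter(Boolean)
  );

  // Count on how many pages each distinct block occurs (once per page).
  const pagesWithBlock = new Map<string, number>();
  for (const blocks of pageBlocks) {
    for (const h of new Set(blocks.map(hashBlock))) {
      pagesWithBlock.set(h, (pagesWithBlock.get(h) ?? 0) + 1);
    }
  }

  // A block repeated on more than maxShare of the pages is boilerplate.
  const threshold = Math.max(2, Math.ceil(pages.length * maxShare));
  return pageBlocks.map((blocks) =>
    blocks
      .filter((b) => (pagesWithBlock.get(hashBlock(b)) ?? 0) < threshold)
      .join('\n\n')
  );
}

// Example: the shared footer survives on neither page.
console.log(stripRepeatedBlocks([
  'Page one body.\n\nCopyright Acme',
  'Page two body.\n\nCopyright Acme',
]));
```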

