Conversation
Summary of Changes

Hello @carliblaz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request adds a powerful new 'Deep Web Crawler' document loader to the system. This new component is designed to efficiently traverse websites, extract relevant textual content, and prepare it for use in Langchain applications. It provides comprehensive control over the crawling process, from navigation strategy to content refinement, ensuring high-quality data acquisition from diverse web sources.
Code Review
This pull request introduces a powerful new Deep Web Crawler document loader. The implementation is robust, leveraging crawlee for crawling and @mozilla/readability for content extraction, with well-structured options for sitemap and link-based crawling. My review identifies a couple of high-severity issues related to inconsistent default parameter values that could lead to unexpected user experiences. Additionally, I've pointed out several instances of unused code (dead code) that should be removed to improve the maintainability and clarity of the new core.ts file. Overall, this is a great addition, and addressing these points will make it even better.
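For context, here is a minimal sketch of how a loader like this can combine crawlee with @mozilla/readability, as the review describes. This is an illustration, not the PR's actual implementation: jsdom is assumed as the DOM shim for Readability, and names like `crawlToDocuments` and `maxPages` are hypothetical.

```ts
import { CheerioCrawler } from 'crawlee'
import { Readability } from '@mozilla/readability'
import { JSDOM } from 'jsdom'
import { Document } from '@langchain/core/documents'

// Hypothetical sketch: crawl pages starting from startUrl and turn the
// readable article text of each page into a Langchain Document.
async function crawlToDocuments(startUrl: string, maxPages = 10): Promise<Document[]> {
    const docs: Document[] = []

    const crawler = new CheerioCrawler({
        maxRequestsPerCrawl: maxPages,
        async requestHandler({ request, body, enqueueLinks }) {
            // Parse the raw HTML with jsdom so Readability can operate on a real DOM
            const url = request.loadedUrl ?? request.url
            const dom = new JSDOM(body.toString(), { url })
            const article = new Readability(dom.window.document).parse()
            if (article?.textContent) {
                docs.push(
                    new Document({
                        pageContent: article.textContent.trim(),
                        metadata: { source: url, title: article.title }
                    })
                )
            }
            // Follow links on the same domain for link-based crawling
            await enqueueLinks({ strategy: 'same-domain' })
        }
    })

    await crawler.run([startUrl])
    return docs
}
```

In this shape, Readability strips scripts, styles, and most chrome from each page, while crawlee handles queueing, deduplication of URLs, and the page limit.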
Resolved review comments (outdated):

- packages/components/nodes/documentloaders/DeepWebCrawler/DeepWebCrawler.ts (2 threads)
- packages/components/nodes/documentloaders/DeepWebCrawler/core.ts (4 threads)
Thanks! Quick question: how is it different from the other crawlers we have in the doc loaders?
The biggest change is that it removes common parts. Why is this important? Because other crawlers collect links from the first page, then crawl the next page, where they find the same links as on the first page, and so on... the links on every page are almost identical, e.g. the navigation links in the header. A concrete sketch of the idea follows below.
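To make that concrete, here is a minimal illustrative sketch (not the PR's code) of one way to remove common parts: count how often each extracted text block appears across the crawled pages, and drop blocks that recur on most of them, since those are typically header, footer, or navigation boilerplate. The `threshold` value and function name are assumptions for illustration.

```ts
// Hypothetical sketch: given one string[] of text blocks per page,
// drop blocks that appear on at least `threshold` of all pages.
function removeCommonParts(pages: string[][], threshold = 0.8): string[][] {
    const pageCount = pages.length
    const blockFrequency = new Map<string, number>()

    // Count how many distinct pages each block appears on
    for (const blocks of pages) {
        for (const block of new Set(blocks)) {
            blockFrequency.set(block, (blockFrequency.get(block) ?? 0) + 1)
        }
    }

    // Keep only blocks below the recurrence threshold
    return pages.map((blocks) =>
        blocks.filter((block) => (blockFrequency.get(block) ?? 0) / pageCount < threshold)
    )
}
```

With this approach a block such as a header link list, which shows up on every page, exceeds the threshold and is stripped, while genuine page content survives because it only appears once.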