
Current problems and mistakes of web scraping in Python and tricks to solve them!


Introduction

Greetings! I'm Max, a Python developer from Ukraine with expertise in web scraping, data analysis, and data processing.

My journey in web scraping started in 2016 when I was solving lead generation challenges for a small company. Initially, I used off-the-shelf solutions such as Import.io and Kimono Labs. However, I quickly encountered limitations such as blocking, inaccurate data extraction, and performance issues. This led me to learn Python. Those were the glory days when requests and lxml/beautifulsoup were enough to extract data from most websites. And if you knew how to work with threads, you were already a respected expert :)

One of our community members wrote this blog as a contribution to Crawlee Blog. If you want to contribute blogs like these to Crawlee Blog, please reach out to us on our discord channel.

Crawlee & Apify: the official developer community of Apify and Crawlee on Discord (8,318 members at the time of writing).

As a freelancer, I've built small solutions and large, complex data mining systems for products over the years.

Today, I want to discuss the realities of web scraping with Python in 2024. We'll look at the mistakes I sometimes see, the problems you'll encounter, and solutions to some of them.

Let's get started.

Just take requests and beautifulsoup and start making a lot of money...

No, this is not that kind of article.

1. "I got a 200 response from the server, but it's an unreadable character set."

Yes, it can be surprising. But I've seen this message from customers and developers six years ago, four years ago, and in 2024. I read a post on Reddit just a few months ago about this issue.

Let's look at a simple code example. This will work for requests, httpx, and aiohttp with a clean installation and no extensions.

import httpx

url = 'https://www.wayfair.com/'

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "Accept": "text/html,application/xhtml xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Connection": "keep-alive",
}

response = httpx.get(url, headers=headers)

print(response.content[:10])

The print result will be similar to:

b'\x83\x0c\x00\x00\xc4\r\x8e4\x82\x8a'

It's not an error - it's a perfectly valid server response. It's encoded somehow.

The answer lies in the Accept-Encoding header. In the example above, I just copied it from my browser, so it lists all the compression methods my browser supports: "gzip, deflate, br, zstd". The Wayfair backend supports compression with "br", which is Brotli, and uses it as the most efficient method.

This happens because none of the libraries listed above include Brotli among their standard dependencies. However, they all support decompressing this format if you already have Brotli installed.

Therefore, it's sufficient to install the appropriate library:

pip install Brotli

This will allow you to get a readable result from the same print call: the beginning of the page's HTML instead of compressed bytes.

You can obtain the same result for aiohttp and httpx by doing the installation with extensions:

pip install aiohttp[speedups]
pip install httpx[brotli]

By the way, adding the brotli dependency was my first contribution to crawlee-python. They use httpx as the base HTTP client.

You may have also noticed that a new supported data compression format, zstd, appeared some time ago. I haven't seen any backends that use it yet, but httpx will support its decompression in versions above 0.28.0. I already use it to compress server response dumps in my projects; it shows incredible efficiency in asynchronous solutions with aiofiles.
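For illustration, here is a minimal sketch of the kind of dump compression I mean. It assumes the third-party zstandard and aiofiles packages are installed; the URL, file name, and compression level are arbitrary placeholders.

import asyncio

import aiofiles
import httpx
import zstandard


async def save_dump(url: str, path: str) -> None:
    # Fetch the page, compress the raw HTML with zstd, and write it to disk asynchronously.
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
    compressed = zstandard.ZstdCompressor(level=10).compress(response.content)
    async with aiofiles.open(path, "wb") as f:
        await f.write(compressed)


asyncio.run(save_dump("https://example.com/", "dump.html.zst"))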

The most common solution to this situation that I've seen is for developers to simply stop using the Accept-Encoding header, thus getting an uncompressed response from the server. Why is that bad? The main page of Wayfair takes about 1 megabyte uncompressed and about 0.165 megabytes compressed.

Therefore, in the absence of this header:

  • You increase the load on your internet bandwidth.
  • If you use a proxy billed by traffic, you increase the cost of each of your requests.
  • You increase the load on the server's internet bandwidth.
  • You're revealing yourself as a scraper, since any browser uses compression.
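If you want to see the difference on a real page yourself, a quick check like the one below works. This is an illustrative sketch: it relies on httpx's num_bytes_downloaded property to count the raw bytes transferred and assumes Brotli support is installed so the compressed response can be decoded.

import httpx

url = 'https://www.wayfair.com/'
ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0'

# Advertise no compression support to force an uncompressed response.
plain = httpx.get(url, headers={'User-Agent': ua, 'Accept-Encoding': 'identity'})

# Let the server pick the most efficient compression it supports.
packed = httpx.get(url, headers={'User-Agent': ua, 'Accept-Encoding': 'gzip, deflate, br, zstd'})

print(plain.num_bytes_downloaded, packed.num_bytes_downloaded)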

But I think the problem is a bit deeper than that. Many web scraping developers simply don't understand what the headers they use do. So if this applies to you, when you're working on your next project, read up on these things; they may surprise you.

2. "I use headers as in an incognito browser, but I get a 403 response". Here's Johnn-... I mean, Cloudflare

Yes, that's right. 2023 brought us not only Large Language Models like ChatGPT but also improved Cloudflare protection.

Those who have been scraping the web for a long time might say, "Well, we've already dealt with DataDome, PerimeterX, InCapsula, and the like."

But Cloudflare has changed the rules of the game. It is one of the largest CDN providers in the world, serving a huge number of sites, and its services are available to many of them with a fairly low entry barrier. This makes it radically different from the technologies mentioned earlier, which were implemented deliberately by sites that specifically wanted protection from scraping.

Cloudflare is the reason why, when you start reading another course on "How to do web scraping using requests and beautifulsoup", you can close it immediately. Because there's a big chance that what you learn will simply not work on any "decent" website.

Let's look at another simple code example:

from httpx import Client

client = Client(http2=True)

url = 'https://www.g2.com/'

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "Accept": "text/html,application/xhtml xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Connection": "keep-alive",
}

response = client.get(url, headers=headers)

print(response)

Of course, the response would be 403.

What if we use curl?

curl -XGET -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -H 'Connection: keep-alive' 'https://www.g2.com/' -s -o /dev/null -w "%{http_code}\n"

Also 403.

Why is this happening?

Cloudflare keeps TLS fingerprints of many HTTP clients that are popular among developers, and site administrators can also customize how aggressively Cloudflare blocks clients based on these fingerprints.

For curl, we can solve it like this:

curl -XGET -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg+xml,*/*;q=0.8' -H 'Accept-Language: en-US,en;q=0.5' -H 'Connection: keep-alive' 'https://www.g2.com/' --tlsv1.3 -s -o /dev/null -w "%{http_code}\n"

You might expect me to write here an equally elegant solution for httpx, but no. About six months ago, you could do the "dirty trick" and change the basic httpcore parameters that it passes to h2, which are responsible for the HTTP2 handshake. But now, as I'm writing this article, that doesn't work anymore.

There are different approaches to getting around this. But let's solve it by manipulating TLS.

The bad news is that all the Python clients I know of use the ssl library to handle TLS. And it doesn't give you the ability to manipulate TLS subtly.

The good news is that the Python community is great and implements solutions that exist in other programming languages.

The first way to solve this problem is to use tls-client

This Python wrapper around the Golang library provides an API similar to requests.

pip install tls-client

from tls_client import Session

client = Session(client_identifier="firefox_120")

url = 'https://www.g2.com/'

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "Accept": "text/html,application/xhtml xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Connection": "keep-alive",
}

response = client.get(url, headers=headers)

print(response)

The tls_client supports TLS presets for popular browsers, which the developers keep up to date. To use one, you must pass the necessary client_identifier. However, the library also allows subtle manual manipulation of TLS.
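For example, instead of a preset you can supply your own fingerprint. The parameter name ja3_string below is taken from the tls-client README as I recall it, so verify it against the version you install; the fingerprint itself is a shortened placeholder.

from tls_client import Session

# Illustrative only: pass a custom JA3 string instead of a browser preset.
client = Session(
    ja3_string="771,4865-4866-4867-49195-49199,0-23-65281-10-11-35-16-5-13,29-23-24,0",
)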

The second way to solve this problem is to use curl_cffi

This is a wrapper around curl-impersonate, a patched build of curl, and it provides an API similar to requests.

pip install curl_cffi

from curl_cffi import requests

url = 'https://www.g2.com/'

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:128.0) Gecko/20100101 Firefox/128.0",
    "Accept": "text/html,application/xhtml xml,application/xml;q=0.9,image/avif,image/webp,image/png,image/svg xml,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate, br, zstd",
    "Connection": "keep-alive",
}

response = requests.get(url, headers=headers, impersonate="chrome124")

print(response)

curl_cffi also provides TLS presets for some browsers, which are specified via the impersonate parameter. It also provides options for subtle manual manipulation of TLS.

I think someone just said, "They're literally doing the same thing." That's right, and they're both still very raw.

Let's do some simple comparisons:

Feature                  | tls_client | curl_cffi
TLS preset               | +          | +
TLS manual               | +          | +
async support            | -          | +
big company support      | -          | +
number of contributors   | -          | +

Obviously, curl_cffi wins in this comparison. But as an active user, I have to say that sometimes there are some pretty strange errors that I'm just unsure how to deal with. And let's be honest, so far, they are both pretty raw.

I think we will soon see other libraries that solve this problem.

One might ask, what about Scrapy? I'll be honest: I don't really keep up with their updates. But I haven't heard about Zyte doing anything to bypass TLS fingerprinting. So out of the box Scrapy will also be blocked, but nothing is stopping you from using curl_cffi in your Scrapy Spider.
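For example, one way to do that is a custom downloader middleware. The class below is a rough sketch of the idea rather than a drop-in solution: the class name is mine, and error handling and retries are omitted.

from curl_cffi import requests as curl_requests
from scrapy.http import HtmlResponse


class CurlCffiDownloaderMiddleware:
    def process_request(self, request, spider):
        # Fetch the page with a browser TLS fingerprint instead of Scrapy's own downloader.
        curl_response = curl_requests.get(
            request.url,
            headers=dict(request.headers.to_unicode_dict()),
            impersonate="chrome124",
        )
        # Returning a Response from process_request short-circuits Scrapy's own download.
        return HtmlResponse(
            url=request.url,
            status=curl_response.status_code,
            body=curl_response.content,
            encoding="utf-8",
            request=request,
        )

You would then enable it via DOWNLOADER_MIDDLEWARES in the project settings, like any other downloader middleware.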

3. What about headless browsers and Cloudflare Turnstile?

Yes, sometimes we need to use headless browsers. Although I'll be honest, from my point of view, they are used too often even when clearly not necessary.

Even in a headless situation, the folks at Cloudflare have managed to make life difficult for the average web scraper by creating a monster called Cloudflare Turnstile.

To test different tools, you can use this demo page.

To quickly test whether a library works with the browser, you should start by checking the usual non-headless mode. You don't even need to use automation; just open the site using the desired library and act manually.
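For instance, with Playwright that manual check could look something like this (the URL below is just a placeholder for the demo page linked above):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch a visible (non-headless) browser and interact with the page by hand.
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.goto("https://example.com/turnstile-demo")  # replace with the demo page above
    page.pause()  # hands control over to you; resume from the Playwright Inspector when done
    browser.close()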

What libraries are worth checking out for this?

Candidate #1 Playwright + playwright-stealth

It'll be blocked and won't let you solve the captcha.

Playwright is a great library for browser automation. However, the developers explicitly state that they don't plan to develop it as a web scraping tool.

And I haven't heard of any Python projects that effectively solve this problem.

Candidate #2 undetected_chromedriver

It'll be blocked and won't let you solve the captcha.

This is a fairly common library for working with headless browsers in Python, and in some cases, it allows bypassing Cloudflare Turnstile. But on the target website, it is blocked. Also, in my projects, I've encountered at least two other cases where Cloudflare blocked undetected_chromedriver.

In general, undetected_chromedriver is a good library for your projects, especially since it uses good old Selenium under the hood.
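If you haven't used it before, a minimal sketch looks like ordinary Selenium code (the target URL is a placeholder):

import undetected_chromedriver as uc

# Start a patched Chrome and drive it through the regular Selenium API.
driver = uc.Chrome()
driver.get("https://example.com")  # placeholder target
print(driver.title)
driver.quit()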

Candidate #3 botasaurus-driver

It allows you to go past the captcha after clicking.

I don't know how its developers pulled this off, but it works. Its main feature is that it was developed specifically for web scraping. It also has a higher-level library to work with - botasaurus.

On the downside, so far, it's pretty raw, and botasaurus-driver has no documentation and has a rather challenging API to work with.

To summarize, most likely, your main library for headless browsing will be undetected_chromedriver. But in some particularly challenging cases, you might need to use botasaurus.

4. What about frameworks?

High-level frameworks are designed to speed up and ease development by allowing us to focus on business logic, although we often pay the price in flexibility and control.

So, what are the frameworks for web scraping in 2024?

Scrapy

It's impossible to talk about Python web scraping frameworks without mentioning Scrapy. Scrapinghub (now Zyte) first released it in 2008. For 16 years, it has been developed as an open-source library upon which development companies built their business solutions.

Talking about the advantages of Scrapy, you could write a separate article. But I will emphasize two of them:

  • The huge amount of tutorials that have been released over the years
  • Middleware libraries written by the community that extend its functionality, for example, scrapy-playwright (see the sketch after this list).
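For reference, wiring scrapy-playwright into a project looks roughly like this; the snippet follows its README as I recall it, so verify the exact values against the current documentation:

# settings.py: route downloads through Playwright via scrapy-playwright.
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

Individual requests are then opted into Playwright rendering by adding meta={"playwright": True} to the Request.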

But what are the downsides?

  • In recent years, Zyte has been focusing more on developing its own platform; Scrapy mostly gets fixes only.
  • Lack of development towards bypassing anti-scraping systems. You have to implement them yourself, but then, why do you need a framework?
  • Scrapy was originally developed with the asynchronous framework Twisted. Partial support for asyncio was added only in version 2.0. Looking through the source code, you may notice some workarounds that were added for this purpose.

Thus, Scrapy is a good and proven solution for sites that are not protected against web scraping. You will need to develop and add the necessary solutions to the framework in order to bypass anti-scraping measures.

Botasaurus

A new framework for web scraping using browser automation, built on botasaurus-driver. The initial commit was made on May 9, 2023.

Let's start with its advantages:

  • Allows you to bypass any Cloudflare protection, as well as many others, using botasaurus-driver.
  • Good documentation for a quick start

Downsides include:

  • Browser automation only, not intended for HTTP clients.
  • Tight coupling with botasaurus-driver; you can't easily replace it with something better if it comes out in the future.
  • No asynchrony, only multithreading.
  • At the moment, it's quite raw and still requires fixes for stable operation.
  • There are very few training materials available at the moment.

This is a good framework for quickly building a web scraper based on browser automation. However, it lacks flexibility and support for HTTP clients, which is crucial for users like me.

Crawlee for Python

A new framework for web scraping in the Python ecosystem. The initial commit was made on Jan 10, 2024, with a release in the media space on July 5, 2024.

apify/crawlee-python on GitHub: Crawlee, a web scraping and browser automation library for Python to build reliable crawlers. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with BeautifulSoup, Playwright, and raw HTTP. Both headful and headless mode. With proxy rotation.

Developed by Apify, it is a Python adaptation of their famous JS framework crawlee, first released on Jul 9, 2019.

As this is a completely new solution on the market, it is now in an active design and development stage, and the community is actively involved in its development. We can see that the use of curl_cffi is already being discussed, and the possibility of creating their own Rust-based client was discussed earlier. I hope the company doesn't abandon the idea.

From Crawlee team:
"Yeah, for sure we will keep improving Crawlee for Python for years to come."

I personally would like to see an HTTP client for Python developed and maintained by a major company. And Rust has shown itself very well as a library language for Python; just remember Ruff and Pydantic v2.

Advantages:

  • The framework was developed by an established company in the web scraping market, which has well-developed expertise in this sphere.
  • Support for both browser automation and HTTP clients.
  • Fully asynchronous, based on asyncio.
  • Active development phase and media activity. The developers listen to the community, which is quite important at this stage.

On a separate note, it has a pretty good modular architecture. If developers introduce the ability to switch between several HTTP clients, we will get a rather flexible framework that allows us to easily change the technologies used, with a simple implementation from the development team.
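To give a feel for the framework, here is roughly what a minimal crawler looked like in the early releases, following the project's quick-start example; import paths may have changed since, so check the current docs:

import asyncio

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler()

    # The default handler is called for every successfully crawled page.
    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        await context.push_data({
            "url": context.request.url,
            "title": context.soup.title.string if context.soup.title else None,
        })

    await crawler.run(["https://crawlee.dev"])


if __name__ == "__main__":
    asyncio.run(main())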

Deficiencies:

  • The framework is new. There are very few training materials available at the moment.
  • At the moment, it's quite raw and still requires fixes for stable operation, as well as convenient interfaces for configuration.
  • There is no implementation of any means of bypassing anti-scraping systems for now, other than changing sessions and proxies. But these are being discussed.

I believe that how successful crawlee-python turns out to be depends primarily on the community. Due to the small number of tutorials, it is not yet suitable for beginners. However, experienced developers may decide to try it instead of Scrapy.

In the long run, it may turn out to be a better solution than Scrapy and Botasaurus. It already provides flexible tools for working with HTTP clients, automating browsers out of the box, and quickly switching between them. However, it lacks tools to bypass scraping protections, and their implementation in the future may be the deciding factor in choosing a framework for you.

Conclusion

If you have read all the way to here, I assume you found it interesting and maybe even helpful :)

The industry is changing and offering new challenges, and if you are professionally involved in web scraping, you will have to keep a close eye on the situation. In some other field, you would remain a developer who makes products using outdated technologies. But in modern web scraping, you become a developer who makes web scrapers that simply don't work.

Also, don't forget that you are part of the larger Python community, and your knowledge can be useful in developing tools that make things happen for all of us. As you can see, many of the tools you need are being built literally right now.

I'll be glad to read your comments. Also, if you need a web scraping expert, or if you just want to discuss the article, you can find me on the following platforms: GitHub, LinkedIn, Apify, Upwork, Contra.

Thank you for your attention :)
