README.md: 28 additions & 6 deletions
@@ -6,14 +6,18 @@ This project provides examples how to process the Common Crawl dataset with [Apa
 
 + [count HTML tags](./html_tag_count.py) in Common Crawl's raw response data (WARC files)
 
-+ [count web server names](./server_count.py) in Common Crawl's metadata (WAT files or WARC files)
++ [count web server names](./server_count.py) in Common Crawl's metadata (HTTP headers in WAT or WARC files)
 
 + list host names and corresponding [IP addresses](./server_ip_address.py) (WAT files or WARC files)
 
 + [word count](./word_count.py) (term and document frequency) in Common Crawl's extracted text (WET files)
 
++ [md5sum](./md5sum.py): run an external command (`md5sum`) on a list of files from a manifest – WARC, WET, WAT, or any other type of file
+
 + [extract links](./wat_extract_links.py) from WAT files and [construct the (host-level) web graph](./hostlinks_to_graph.py) – for further details about the web graphs see the project [cc-webgraph](https://github.com/commoncrawl/cc-webgraph)
 
++ [WET extractor](./wet_extractor.py), using FastWARC and Resiliparse. See also [Using FastWARC](#using-fastwarc-to-read-warc-files).
+
 + work with the [columnar URL index](https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/) (see also [cc-index-table](https://github.com/commoncrawl/cc-index-table) and the notes about [querying the columnar index](#querying-the-columnar-index)):
 
   - run a SQL query and [export the result as a table](./cc_index_export.py)
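The WARC/WAT/WET examples listed in the hunk above all follow the same pattern: a small job class derived from `CCSparkJob` (defined in `sparkcc.py`) that implements `process_record()`. The following sketch only illustrates that pattern under those assumptions; the record handling is simplified (warcio-style records) and is not code from the repository.

```python
# Minimal sketch of an example job, assuming the CCSparkJob API from sparkcc.py
# (process_record() yields (key, count) pairs that the base class aggregates).
from sparkcc import CCSparkJob


class TitleTagCount(CCSparkJob):
    """Hypothetical miniature of html_tag_count.py: count <title> tags."""
    name = "TitleTagCount"

    def process_record(self, record):
        # warcio-style record: only look at HTTP responses (WARC files)
        if record.rec_type != 'response':
            return
        payload = record.content_stream().read().decode('utf-8', errors='ignore')
        yield 'title', payload.lower().count('<title')


if __name__ == '__main__':
    job = TitleTagCount()
    job.run()
```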
@@ -65,7 +69,7 @@ This will install v3.5.7 of [the PySpark python package](https://spark.apache.or
 
 Install Spark (see the [Spark documentation](https://spark.apache.org/docs/latest/) for guidance). Then ensure that `spark-submit` and `pyspark` are on your `$PATH`, or prepend `$SPARK_HOME/bin` when running them, e.g. `$SPARK_HOME/bin/spark-submit`.
 
-> Note: The PySpark package is required if you want to run the tests in `test/`.
+> Note: The PySpark package and "py4j" are required if you want to run the tests in `test/`. Both packages are also included in Spark installations, at `$SPARK_HOME/python` and `$SPARK_HOME/python/lib/py4j-*-src.zip` respectively.
 
 ## Compatibility and Requirements
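As a quick way to confirm that the PySpark installation described in the hunk above works, a local SparkSession can be created and queried for its version. This snippet is illustrative only, not part of the repository:

```python
# Sanity check for the PySpark installation: start a local SparkSession,
# print the Spark version, and shut down again.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[1]")
         .appName("pyspark-install-check")
         .getOrCreate())
print(spark.version)  # e.g. 3.5.x, matching the installed PySpark package
spark.stop()
```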
@@ -155,7 +159,10 @@ As the Common Crawl dataset lives in the Amazon Public Datasets program, you can
 
 3. don't forget to deploy all dependencies in the cluster, see [advanced dependency management](https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management)
 
-4. also the file `sparkcc.py` needs to be deployed or added as argument `--py-files sparkcc.py` to `spark-submit`. Note: some of the examples require further Python files as dependencies.
+4. also the file `sparkcc.py` needs to be deployed or added as argument `--py-files sparkcc.py` to `spark-submit`. Note: several of the examples require further Python files as dependencies.
+
+The script [run_ccpyspark_job_hadoop.sh](./run_ccpyspark_job_hadoop.sh) shows an example of how to run a Spark job on a Hadoop cluster (Spark on YARN). Please do not forget to adapt this script to your needs.
+
 
 ### Command-line options
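Besides passing `--py-files sparkcc.py` to `spark-submit` as described in item 4 of the hunk above, dependencies can also be shipped programmatically from the driver using the standard `SparkContext.addPyFile()` API. This is a hedged sketch, not something the repository's scripts do themselves:

```python
# Ship sparkcc.py (and any further helper modules) to the executors from
# inside the driver program, as an alternative to --py-files on the command line.
from pyspark import SparkContext

sc = SparkContext(appName="deploy-dependencies-sketch")
sc.addPyFile("sparkcc.py")               # makes `import sparkcc` work on the executors
# sc.addPyFile("wat_extract_links.py")   # add more files if the job imports them
```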
@@ -206,7 +213,7 @@ Querying the columnar index using cc-pyspark requires authenticated S3 access. T
 
 #### Installation of S3 Support Libraries
 
-While WARC/WAT/WET files are read using boto3, accessing the [columnar URL index](https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/) (see option `--query` of CCIndexSparkJob) is done directly by the SparkSQL engine and requires that S3 support libraries are available. These libs are usually provided when the Spark job is run on a Hadoop cluster running on AWS (e.g., EMR). However, they are not necessarily part of every Spark distribution and are usually absent when running Spark locally (not in a Hadoop cluster). In these situations, the easiest way is to add the libs as required packages by adding `--packages org.apache.hadoop:hadoop-aws:3.2.1` to the arguments of `spark-submit`. This will make [Spark manage the dependencies](https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management) - the hadoop-aws package and transitive dependencies are downloaded as Maven dependencies. Note that the required version of the hadoop-aws package depends on the Hadoop version bundled with your Spark installation, e.g., Spark 3.2.1 bundled with Hadoop 3.2 ([spark-3.2.1-bin-hadoop3.2.tgz](https://archive.apache.org/dist/spark/spark-3.2.1/spark-3.2.1-bin-hadoop3.2.tgz)).
+While WARC/WAT/WET files are read using boto3, accessing the [columnar URL index](https://commoncrawl.org/2018/03/index-to-warc-files-and-urls-in-columnar-format/) (see option `--query` of CCIndexSparkJob) is done directly by the SparkSQL engine and requires that S3 support libraries are available. These libs are usually provided when the Spark job is run on a Hadoop cluster running on AWS (e.g., EMR). However, they are not necessarily part of every Spark distribution and are usually absent when running Spark locally (not in a Hadoop cluster). In these situations, the easiest way is to add the libs as required packages by adding `--packages org.apache.hadoop:hadoop-aws:3.3.4` to the arguments of `spark-submit`. This will make [Spark manage the dependencies](https://spark.apache.org/docs/latest/submitting-applications.html#advanced-dependency-management) - the hadoop-aws package and transitive dependencies are downloaded as Maven dependencies. Note that the required version of the hadoop-aws package depends on the Hadoop version bundled with your Spark installation, e.g., Spark 3.5.6 is bundled with Hadoop 3.3.4 ([spark-3.5.6-bin-hadoop3.tgz](https://archive.apache.org/dist/spark/spark-3.5.6/spark-3.5.6-bin-hadoop3.tgz)). Please check your Spark package and the underlying Hadoop installation for the correct version.
 
 Please also note that:
 - the schema of the URL referencing the columnar index depends on the actual S3 file system implementation: it's `s3://` on EMR but `s3a://` when using [s3a](https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Introducing_the_Hadoop_S3A_client.).
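The same hadoop-aws dependency can also be requested from inside a PySpark program via the `spark.jars.packages` configuration. A minimal sketch, assuming authenticated S3 access is already configured and that 3.3.4 is the right version for your Spark/Hadoop build (check your installation, as noted above):

```python
# Read the columnar URL index directly with SparkSQL; outside EMR the S3A
# file system implementation is used, hence the s3a:// scheme.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("columnar-index-sketch")
         .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.3.4")
         .getOrCreate())

df = spark.read.parquet("s3a://commoncrawl/cc-index/table/cc-main/warc/")
df.createOrReplaceTempView("ccindex")
spark.sql("SELECT COUNT(*) FROM ccindex "
          "WHERE crawl = 'CC-MAIN-2020-24' AND subset = 'warc'").show()
```

Note that `spark.jars.packages` must be set before the SparkSession is created; when launching via `spark-submit`, the `--packages` option shown in the next hunk is the more common route.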
@@ -217,7 +224,8 @@ Please also note that:
 Below is an example call to count words in 10 WARC records hosted under the `.is` top-level domain, using the `--packages` option:
 ```
 spark-submit \
-    --packages org.apache.hadoop:hadoop-aws:3.3.2 \
+    --packages org.apache.hadoop:hadoop-aws:3.3.4 \
+    --conf spark.sql.parquet.mergeSchema=true \
     ./cc_index_word_count.py \
     --input_base_url s3://commoncrawl/ \
     --query "SELECT url, warc_filename, warc_record_offset, warc_record_length, content_charset FROM ccindex WHERE crawl = 'CC-MAIN-2020-24' AND subset = 'warc' AND url_host_tld = 'is' LIMIT 10" \
@@ -241,7 +249,7 @@ Alternatively, it's possible configure the table schema explicitly:
 - and use it by adding the command-line argument `--table_schema cc-index-schema-flat.json`.
 
 
-### Using FastWARC to parse WARC files
+### Using FastWARC to read WARC files
 
 > [FastWARC](https://resiliparse.chatnoir.eu/en/latest/man/fastwarc.html) is a high-performance WARC parsing library for Python written in C++/Cython. The API is inspired in large parts by WARCIO, but does not aim at being a drop-in replacement.
@@ -255,6 +263,20 @@ Some differences between the warcio and FastWARC APIs are hidden from the user i
 
 However, it's recommended that you carefully verify that your custom job implementation works in combination with FastWARC. There are subtle differences between the warcio and FastWARC APIs, including the underlying classes (WARC/HTTP headers and stream implementations). In addition, FastWARC does not support legacy ARC files and does not automatically decode HTTP content and transfer encodings (see [Resiliparse HTTP Tools](https://resiliparse.chatnoir.eu/en/latest/man/parse/http.html#read-chunked-http-payloads)). While content and transfer encodings are already decoded in Common Crawl WARC files, this may not be the case for WARC files from other sources. See also [WARC 1.1 specification, http/https response records](https://iipc.github.io/warc-specifications/specifications/warc-format/warc-1.1/#http-and-https-schemes).
 
+FastWARC allows you to filter out unwanted WARC record types at parse time, e.g., to skip request records immediately without even passing them to the caller. To get the maximum performance from FastWARC, it's recommended to use these filters by setting the static class variable `fastwarc_record_filter` (a sketch follows below).
+
+The following examples are ported to use FastWARC:
++ [count HTML tags](./html_tag_count_fastwarc.py)
++ [count web server names](./server_count_fastwarc.py)
++ list host names and corresponding [IP addresses](./server_ip_address_fastwarc.py)
++ [word count](./word_count_fastwarc.py)
+
+In addition, the following tools are implemented using FastWARC:
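To illustrate the `fastwarc_record_filter` class variable mentioned in the hunk above, here is a hedged sketch. The `WarcRecordType` flags come from the FastWARC library itself; the `CCFastWarcSparkJob` base class and the `sparkcc_fastwarc` module name are assumptions about the repository layout and should be verified against the actual source.

```python
# Let FastWARC drop everything except response records at parse time.
from fastwarc.warc import WarcRecordType
from sparkcc_fastwarc import CCFastWarcSparkJob  # assumed module/class name


class ResponseOnlyCount(CCFastWarcSparkJob):
    """Hypothetical job that counts the response records surviving the filter."""
    name = "ResponseOnlyCount"

    # records of other types are skipped inside FastWARC's parser and
    # never reach process_record()
    fastwarc_record_filter = WarcRecordType.response

    def process_record(self, record):
        yield 'response_records', 1


if __name__ == '__main__':
    job = ResponseOnlyCount()
    job.run()
```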