Publish shard CC-MAIN-2026-12/00000
LICENSE
CHANGED
@@ -1,11 +1,15 @@

Open Data Commons Attribution License (ODC-By) v1.0

Full text: https://opendatacommons.org/licenses/by/1-0/

You are free to copy, distribute, use, modify, transform, and build upon
this database, as long as you attribute the source.

Attribution: "OpenHTML, derived from Common Crawl (https://commoncrawl.org)"

Note: This dataset contains data derived from Common Crawl, which archives
third-party web content. The original content remains subject to the rights
of its respective publishers. You are responsible for complying with applicable
law including downstream licensing obligations, robots.txt restrictions, privacy
requirements, and content removal requests. See Common Crawl's Terms of Use:
https://commoncrawl.org/terms-of-use
README.md
CHANGED
@@ -1,99 +1,279 @@
---
license: odc-by
task_categories:
- text-generation
- feature-extraction
- text-classification
language:
- en
- mul
pretty_name: OpenHTML
size_categories:
- 1M<n<10M
tags:
- common-crawl
- web-crawl
- html
- text
- metadata
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*/*
- config_name: CC-MAIN-2026-12
  data_files:
  - split: train
    path: data/CC-MAIN-2026-12/*
---

# **OpenHTML**

> Raw HTML from the web with rich structured metadata — ready for training, retrieval, and analysis.

## What is it?

**OpenHTML** is a large-scale web dataset built from [Common Crawl](https://commoncrawl.org). Common Crawl is a non-profit that crawls the web and freely provides its archives and datasets to the public — see [their latest crawl announcement](https://commoncrawl.org/blog/march-2026-crawl-archive-now-available) for details on the source data. Every page goes through a pipeline that extracts the raw HTML body along with structured metadata from WARC records, HTTP response headers, and HTML `<head>` tags, then packages everything into Parquet files with 24 columns.

The dataset currently includes crawl **CC-MAIN-2026-12** with **19,785 documents across 1 shard**. The pipeline processed 2.2 GB of raw HTML into 2.2 GB of stored body text, which compresses down to 476.6 MB as Parquet (Zstd). We plan to add more snapshots over time.

**OpenHTML** is released under the **Open Data Commons Attribution License (ODC-By) v1.0**, the same license used by Common Crawl.

## What is being released?

Each Common Crawl WARC file (~1 GB of compressed HTML) becomes one Parquet shard. The shards live under a crawl-specific directory so multiple snapshots can coexist:

```
data/
  CC-MAIN-2026-12/
    00000.parquet
    00001.parquet
    ...
```

Every row in a Parquet file is one web page with **24 columns** of metadata. Each row includes the `warc_record_id` and `warc_date` fields parsed from the original WARC headers, so you can trace any document back to its source record. We also extract HTTP response headers (`content_type`, `charset`, `content_language`, `http_server`, `http_last_modified`) and HTML `<head>` metadata (`title`, `description`, `og:title`, `og:description`, `og:image`, `og:type`, `canonical_url`, `html_lang`). The URL is decomposed into `host`, `domain` (eTLD+1), `path`, and `query`.

## How to download and use OpenHTML

### Using `datasets`

```python
from datasets import load_dataset

# stream the entire dataset
ds = load_dataset("open-index/open-html", name="CC-MAIN-2026-12", split="train", streaming=True)
for doc in ds:
    print(doc["url"], doc["title"], len(doc["body"]))

# load a single shard into memory
ds = load_dataset(
    "open-index/open-html",
    data_files="data/CC-MAIN-2026-12/00000.parquet",
    split="train",
)
```

### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

folder = snapshot_download(
    "open-index/open-html",
    repo_type="dataset",
    local_dir="./open-html/",
    allow_patterns="data/CC-MAIN-2026-12/*",
)
```

For faster downloads, install the transfer extra with `pip install "huggingface_hub[hf_transfer]"` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.

### Using DuckDB

```sql
SELECT url, title, domain, html_lang, html_length
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE domain = 'wikipedia.org'
LIMIT 10;
```

```sql
-- Top domains by page count
SELECT domain, COUNT(*) AS pages, AVG(html_length) AS avg_html
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
GROUP BY domain
ORDER BY pages DESC
LIMIT 20;
```

```sql
-- Pages with Open Graph metadata
SELECT url, og_title, og_description, og_image
FROM read_parquet('hf://datasets/open-index/open-html/data/CC-MAIN-2026-12/*.parquet')
WHERE og_title != '' AND og_image != ''
LIMIT 10;
```

# Dataset card for OpenHTML

## Dataset Description

- **Homepage and Repository:** [https://huggingface.co/datasets/open-index/open-html](https://huggingface.co/datasets/open-index/open-html)
- **Point of Contact:** please create a discussion on the Community tab
- **License:** Open Data Commons Attribution License (ODC-By) v1.0

## Dataset Structure

### Data Instance

The following is an example row from the dataset:

```json
{
  "url": "https://example.com/article/interesting-topic",
  "warc_date": "2026-03-05T07:14:58Z",
  "warc_record_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "warc_filename": "CC-MAIN-20260305070756-20260305100756-00000.warc.gz",
  "http_status": 200,
  "content_type": "text/html",
  "charset": "utf-8",
  "content_language": "en",
  "http_server": "nginx",
  "http_last_modified": "Tue, 04 Mar 2026 12:00:00 GMT",
  "host": "example.com",
  "domain": "example.com",
  "path": "/article/interesting-topic",
  "query": "",
  "html_lang": "en",
  "title": "Interesting Topic - Example",
  "description": "A fascinating article about interesting topics.",
  "og_title": "Interesting Topic",
  "og_description": "A fascinating article about interesting topics.",
  "og_image": "https://example.com/images/topic.jpg",
  "og_type": "article",
  "canonical_url": "https://example.com/article/interesting-topic",
  "html_length": 48210,
  "body": "<!DOCTYPE html><html lang=\"en\"><head>..."
}
```

### Data Fields

| Column | Type | Description |
|---|---|---|
| `url` | string | Full URL of the crawled page |
| `warc_date` | string | Crawl timestamp from the WARC record (RFC 3339) |
| `warc_record_id` | string | UUID from the WARC-Record-ID header, for source traceability |
| `warc_filename` | string | Source WARC file basename from Common Crawl |
| `http_status` | int32 | HTTP response status code (always 200 in this dataset) |
| `content_type` | string | Content-Type from the HTTP response (always starts with `text/html`) |
| `charset` | string | Character encoding from the Content-Type header (e.g., `utf-8`, `iso-8859-1`) |
| `content_language` | string | Content-Language HTTP header (e.g., `en`, `de`, `fr`) |
| `http_server` | string | Server software from the HTTP response (e.g., `nginx`, `Apache`) |
| `http_last_modified` | string | Last-Modified HTTP header — when the page was last changed |
| `host` | string | Lowercase hostname extracted from the URL (e.g., `www.example.com`) |
| `domain` | string | Registered domain (eTLD+1) — groups subdomains together (e.g., `example.com`) |
| `path` | string | URL path component (e.g., `/article/interesting-topic`) |
| `query` | string | URL query string, if any (e.g., `page=2&sort=date`) |
| `html_lang` | string | Language attribute from `<html lang="...">` tag |
| `title` | string | Page title from `<title>` tag in `<head>` |
| `description` | string | Meta description from `<meta name="description">` |
| `og_title` | string | Open Graph title from `<meta property="og:title">` |
| `og_description` | string | Open Graph description from `<meta property="og:description">` |
| `og_image` | string | Open Graph image URL from `<meta property="og:image">` |
| `og_type` | string | Open Graph type from `<meta property="og:type">` (e.g., `article`, `website`) |
| `canonical_url` | string | Canonical URL from `<link rel="canonical">` — the page's preferred URL |
| `html_length` | int64 | Byte length of the original HTML body before truncation |
| `body` | string | Raw HTML body, truncated at 256 KB per record |

### Data Splits

The default subset includes all available data across all crawl snapshots. You can also load a specific crawl by using its ID as the config name (e.g. `CC-MAIN-2026-12`).

## Dataset Creation

### Curation Rationale

Most open web datasets either release raw text (losing structure) or processed markdown (losing metadata). **OpenHTML** takes a different approach: it preserves the **raw HTML** alongside **24 columns of structured metadata** extracted from WARC headers, HTTP response headers, and HTML `<head>` tags. This lets you:

- **Train** models on raw web content with full context
- **Filter** by language, domain, content type, or Open Graph metadata
- **Analyze** web structure, server software distribution, or charset usage
- **Trace** every document back to its exact WARC source record

### Source Data

The source data consists of web pages crawled by the [Common Crawl](https://commoncrawl.org) foundation. Common Crawl archives billions of pages across the public web and makes the raw WARC files freely available on Amazon S3.

### Data Processing Steps

The processing pipeline runs as a single-pass extraction:

1. **Download** raw .warc.gz files from Common Crawl S3 (each file is roughly 1 GB compressed)
2. **Filter** to keep only HTTP 200 responses with a `text/html` content type, discarding images, scripts, redirects, and error pages
3. **Parse** HTTP response headers to extract `content_type`, `charset`, `content_language`, `server`, and `last_modified`
4. **Decompose** the URL into `host`, `domain` (eTLD+1 via the Public Suffix List), `path`, and `query`
5. **Extract** HTML `<head>` metadata using a streaming tokenizer: `title`, `description`, Open Graph tags (`og:title`, `og:description`, `og:image`, `og:type`), `canonical_url`, and `html_lang`
6. **Truncate** the HTML body at 256 KB (the full `html_length` is preserved for reference)
7. **Export** directly to Apache Parquet with Zstd compression, 100,000 rows per row group
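Step 4 can be sketched with the standard library alone. This is an illustration, not the pipeline's code: the real `domain` column is resolved against the Public Suffix List, while the last-two-labels fallback below is a deliberate simplification that is wrong for suffixes such as `co.uk`.

```python
from urllib.parse import urlsplit

def decompose_url(url: str) -> dict:
    # `hostname` is already lowercased by urlsplit
    parts = urlsplit(url)
    host = parts.hostname or ""
    labels = host.split(".")
    # Simplification: real eTLD+1 requires the Public Suffix List
    domain = ".".join(labels[-2:]) if len(labels) >= 2 else host
    return {"host": host, "domain": domain, "path": parts.path, "query": parts.query}

row = decompose_url("https://www.example.com/article/interesting-topic?page=2&sort=date")
# row == {"host": "www.example.com", "domain": "example.com",
#         "path": "/article/interesting-topic", "query": "page=2&sort=date"}
```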

No intermediate files are created — the pipeline streams from compressed WARC through extraction directly into Parquet. Pages that produce empty HTML bodies are dropped.
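The `<head>` extraction in step 5 can be approximated with Python's built-in `html.parser`. This is a simplified stand-in for the pipeline's actual streaming tokenizer (the class and function names are illustrative), but it shows the key property: metadata collection stops as soon as `<body>` begins.

```python
from html.parser import HTMLParser

class HeadMeta(HTMLParser):
    """Collect <title>, meta description, Open Graph tags, canonical URL,
    and html lang — head-only, like the dataset's extractor."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.fields = {}
        self._in_title = False
        self._done = False

    def handle_starttag(self, tag, attrs):
        if self._done:
            return
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "html" and "lang" in a:
            self.fields["html_lang"] = a["lang"]
        elif tag == "meta":
            if a.get("name") == "description":
                self.fields["description"] = a.get("content", "")
            elif a.get("property", "").startswith("og:"):
                self.fields[a["property"].replace(":", "_")] = a.get("content", "")
        elif tag == "link" and a.get("rel") == "canonical":
            self.fields["canonical_url"] = a.get("href", "")
        elif tag == "body":
            self._done = True  # stop: metadata is only read from <head>

    def handle_data(self, data):
        if self._in_title:
            self.fields["title"] = self.fields.get("title", "") + data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

def extract_head(html: str) -> dict:
    p = HeadMeta()
    p.feed(html)
    return p.fields
```

Because the scan halts at `<body>`, tags placed in the body are ignored, which matches the limitation noted further down in this card.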

### Compression Ratios

Numbers below are actual measurements from the single processed file of CC-MAIN-2026-12 (19,785 pages total), projected to the full crawl of 100,000 WARC files.

| Stage | 1 file (measured) | 100,000 files (projected) | Reduction |
|---|---|---|---|
| Raw WARC (.warc.gz, downloaded) | ~830.0 MB | ~79.2 TB | — |
| HTML extracted (uncompressed) | 2.2 GB | ~216.3 TB | — |
| Body stored (truncated at 256 KB) | 2.2 GB | ~214.2 TB | **-1.0%** vs HTML |
| Final Parquet (Zstd) | 476.6 MB | ~45.5 TB | **-78.8%** vs body |

The body column stores the raw HTML truncated at 256 KB. Parquet with Zstd then compresses the data further. End to end: ~830.0 MB of raw gzipped WARCs becomes **476.6 MB of Parquet** — a **42.6% total reduction** — containing 19,785 web pages with full metadata.
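The percentages above follow directly from the byte counts recorded in `stats.csv`; the ~830 MB WARC size is taken from the text (it is not in `stats.csv`, and treating it as 830 MiB is an assumption here):

```python
# Byte counts for shard 00000, from stats.csv
html_bytes = 2_378_307_194     # extracted HTML
body_bytes = 2_355_180_440     # body after 256 KB truncation
parquet_bytes = 499_802_522    # final Parquet (Zstd)
warc_bytes = 830 * 2**20       # ~830 MB raw .warc.gz (assumed MiB)

def pct(small, big):
    """Percent reduction going from `big` down to `small`."""
    return round(100 * (1 - small / big), 1)

print(pct(body_bytes, html_bytes))     # truncation loss vs HTML  → 1.0
print(pct(parquet_bytes, body_bytes))  # Parquet+Zstd vs body     → 78.8
print(pct(parquet_bytes, warc_bytes))  # end to end vs raw WARC   → 42.6
```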

### Processing Times

Pipeline timings across 1 shard of CC-MAIN-2026-12:

```
Download (raw WARC)              ░░░░░░░░░░░░░░░░░░░░░░░░ —
Extract (WARC → HTML + metadata) ████████████████████████ 2m 10s
Publish (HuggingFace upload)     ░░░░░░░░░░░░░░░░░░░░░░░░ —
```

### Dataset Charts

[.](https://huggingface.co/datasets/open-index/open-html/resolve/main/a

4.png)

### Personal and Sensitive Information

No additional PII filtering is applied beyond what Common Crawl provides. As the dataset is sourced from the public web, it is likely that some personally identifiable information is present. If you find your own PII in the dataset and would like it removed, please open an issue on the repository.

## Considerations for Using the Data

### Social Impact

By releasing both the dataset and the full processing pipeline, we aim to lower the barrier to training and evaluating language models on high-quality web data. Researchers and practitioners who cannot afford to run their own Common Crawl processing pipelines can use **OpenHTML** directly.

### Discussion of Biases

**OpenHTML** inherits the biases present in Common Crawl and the public web at large. The filtering step keeps only `text/html` pages, which may underrepresent content served as other content types. We have not applied any machine-learning-based quality or toxicity filters, as such filters have been shown to disproportionately remove content from certain dialects and communities.

### Known Limitations

The HTML body is truncated at 256 KB per record. Very long pages (e.g., full-text articles with inline images as data URIs) may be incomplete. The `html_length` field always reflects the true size. If you need complete HTML for specific pages, use the `warc_record_id` and `warc_filename` to retrieve the original from Common Crawl.
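One way to recover an untruncated page is to look it up in the crawl's public CDX index, which returns the byte offset and length of the capture inside its source WARC, and then issue an HTTP range request for just that record. A minimal sketch (the index endpoint pattern and the `filename`/`offset`/`length` response fields are Common Crawl's CDX API; the helper names are illustrative and error handling is omitted):

```python
from urllib.parse import urlencode

CDX = "https://index.commoncrawl.org/{crawl}-index"
DATA = "https://data.commoncrawl.org/"

def cdx_lookup_url(crawl: str, page_url: str) -> str:
    """Build the CDX index query; each JSON line of the response
    carries the WARC `filename`, `offset`, and `length` of a capture."""
    return CDX.format(crawl=crawl) + "?" + urlencode({"url": page_url, "output": "json"})

def warc_range_request(hit: dict) -> tuple:
    """Target URL and Range header that fetch exactly one WARC record."""
    start = int(hit["offset"])
    end = start + int(hit["length"]) - 1
    return DATA + hit["filename"], {"Range": f"bytes={start}-{end}"}
```

Fetch the first URL with any HTTP client, take a result line's `filename`/`offset`/`length`, then GET the second URL with the Range header; the response bytes are an independently gzipped WARC record that `gzip.decompress` turns back into the full capture.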

Metadata extraction scans only the `<head>` section for performance. Pages that place `<meta>` or `<title>` tags in the `<body>` will have missing metadata.

## Additional Information

### Licensing

The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0**. The use of this dataset is also subject to [Common Crawl's Terms of Use](https://commoncrawl.org/terms-of-use). The original content remains subject to the rights and terms of its respective publishers.

### Contact

Please open a discussion on the [Community tab](https://huggingface.co/datasets/open-index/open-html/discussions) for questions, feedback, or issues.
stats.csv
CHANGED
@@ -1,2 +1,2 @@

crawl_id,file_idx,rows,html_bytes,body_bytes,md_bytes,parquet_bytes,created_at,dur_download_s,dur_extract_s,dur_export_s,dur_publish_s,peak_rss_mb
CC-MAIN-2026-12,0,19785,2378307194,2355180440,2355180440,499802522,2026-03-24T03:56:29Z,0,123,7,0,830
|