Export Handler¶
Source code in tokensmith/export/handler.py
export_batches¶
export_batches(batch_ids, batch_size, output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False, flatten_batches=False)
Export specific batches to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| batch_ids | List[int] | List of batch IDs to export. | required |
| batch_size | int | The size of each batch. | required |
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |
| flatten_batches | bool | If True, flattens all batches into a single list of samples. | False |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported or tokenizer is None when return_detokenized is True. |
Source code in tokensmith/export/handler.py
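A minimal usage sketch follows. The `export_handler` variable stands in for an already-constructed export handler from this module, and the Hugging Face tokenizer is an arbitrary choice; both are assumptions and depend on your project setup, not part of the documented API.

```python
from transformers import AutoTokenizer

# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer choice

# Write batches 0, 5, and 7 (each of 16 sequences) as detokenized JSONL.
export_handler.export_batches(
    batch_ids=[0, 5, 7],
    batch_size=16,
    output_path="batches.jsonl",
    format_type="jsonl",
    return_detokenized=True,
    tokenizer=tokenizer,
    flatten_batches=True,  # one flat list of samples instead of nested batches
)
```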
export_sequences¶
export_sequences(sequence_indices, output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False)
Export specific sequences to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| sequence_indices | List[int] | List of sequence indices to export. | required |
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported or tokenizer is None when return_detokenized is True. |
Source code in tokensmith/export/handler.py
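A minimal usage sketch is shown below; the `export_handler` variable (an already-constructed export handler from this module) is an assumption. Because return_detokenized is False here, no tokenizer is needed and raw token arrays are written.

```python
# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
# Export three specific sequences as raw token arrays (no tokenizer needed
# because return_detokenized=False).
export_handler.export_sequences(
    sequence_indices=[12, 48, 1024],
    output_path="sequences.jsonl",
    format_type="jsonl",
    return_detokenized=False,
)
```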
export_entire_dataset¶
export_entire_dataset(output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False, chunk_size=1000)
Export the entire dataset to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |
| chunk_size | int | Number of samples to process at a time to manage memory usage. | 1000 |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported or tokenizer is None when return_detokenized is True. |
Source code in tokensmith/export/handler.py
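A minimal usage sketch follows; the `export_handler` variable and the choice of Hugging Face tokenizer are assumptions, not part of the documented API. It streams the whole dataset to disk in chunks so memory stays bounded.

```python
from transformers import AutoTokenizer

# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer choice

# Export the full dataset as detokenized JSONL, 2000 samples per chunk.
export_handler.export_entire_dataset(
    output_path="full_dataset.jsonl",
    format_type="jsonl",
    return_detokenized=True,
    tokenizer=tokenizer,
    chunk_size=2000,
)
```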
export_sequence_range¶
export_sequence_range(start_idx, end_idx, output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False)
Export a range of sequences to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| start_idx | int | Starting sequence index (inclusive). | required |
| end_idx | int | Ending sequence index (exclusive). | required |
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported, tokenizer is None when return_detokenized is True, or if start_idx >= end_idx or indices are negative. |
Source code in tokensmith/export/handler.py
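A minimal usage sketch follows; the `export_handler` variable and the Hugging Face tokenizer are assumptions that depend on your project setup. Note that the end index is exclusive.

```python
from transformers import AutoTokenizer

# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer choice

# Export sequences 100..199 (end_idx=200 is exclusive) as detokenized CSV.
export_handler.export_sequence_range(
    start_idx=100,
    end_idx=200,
    output_path="sequences_100_200.csv",
    format_type="csv",
    return_detokenized=True,
    tokenizer=tokenizer,
)
```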
export_batch_range¶
export_batch_range(start_batch, end_batch, batch_size, output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False, flatten_batches=False)
Export a range of batches to a file.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| start_batch | int | Starting batch ID (inclusive). | required |
| end_batch | int | Ending batch ID (exclusive). | required |
| batch_size | int | The size of each batch. | required |
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |
| flatten_batches | bool | If True, flattens all batches into a single list of samples. | False |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported, tokenizer is None when return_detokenized is True, or if start_batch >= end_batch or batch IDs are negative. |
Source code in tokensmith/export/handler.py
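A minimal usage sketch follows; the `export_handler` variable and the Hugging Face tokenizer are assumptions, not part of the documented API. The end batch ID is exclusive, so this covers batches 10 through 19.

```python
from transformers import AutoTokenizer

# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer choice

# Export batches 10..19, 32 sequences per batch, keeping the nested
# batch structure (flatten_batches=False).
export_handler.export_batch_range(
    start_batch=10,
    end_batch=20,
    batch_size=32,
    output_path="batches_10_20.jsonl",
    return_detokenized=True,
    tokenizer=tokenizer,
    flatten_batches=False,
)
```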
export_dataset_range¶
export_dataset_range(start_idx, end_idx, output_path, format_type='jsonl', return_detokenized=True, tokenizer=None, include_doc_details=False, chunk_size=1000)
Export a range of the dataset to a file with memory-efficient chunking.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| start_idx | int | Starting sequence index (inclusive). | required |
| end_idx | int | Ending sequence index (exclusive). | required |
| output_path | str | Path to the output file. | required |
| format_type | str | Format to export ("jsonl" or "csv"). | 'jsonl' |
| return_detokenized | bool | If True, exports detokenized text; otherwise exports token arrays. | True |
| tokenizer | Optional[Any] | The tokenizer to use for detokenization (required if return_detokenized is True). | None |
| include_doc_details | bool | If True, includes document details in the export. | False |
| chunk_size | int | Number of samples to process at a time to manage memory usage. | 1000 |

Raises:

| Type | Description |
|---|---|
| ValueError | If format_type is not supported, tokenizer is None when return_detokenized is True, or if start_idx >= end_idx or indices are negative. |
Source code in tokensmith/export/handler.py
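A minimal usage sketch follows; as above, the `export_handler` variable and the Hugging Face tokenizer are assumptions that depend on your project setup.

```python
from transformers import AutoTokenizer

# Assumption: `export_handler` is an already-constructed export handler
# from tokensmith/export/handler.py; how it is built is project-specific.
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed tokenizer choice

# Export the first 50,000 sequences with document details, processed in
# chunks of 1000 samples to keep memory usage bounded.
export_handler.export_dataset_range(
    start_idx=0,
    end_idx=50_000,
    output_path="first_50k.jsonl",
    return_detokenized=True,
    tokenizer=tokenizer,
    include_doc_details=True,
    chunk_size=1000,
)
```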