Language training and inference adapters over the shared Dragon BDH core.
Paper mapping:
`burn_dragon_core::BDH` owns the paper-faithful `x_neuron`, `y_gate`, `y_neuron`, and per-layer recurrent `rho` contract; this crate layers tokenization, datasets, generation, and training schedules on top of that core without redefining the recurrent state semantics.
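A minimal end-to-end sketch of that layering, assuming the crate is published as `burn_dragon_language`. The item names (`CharVocab`, `build_model_config`, `GenerationSettings`, `generate_text`) come from the re-exports below, but the crate name and every constructor and signature here are assumptions, not documented API:

```rust
// Hypothetical sketch only: the crate name and all signatures below are
// assumptions; only the item names are taken from this crate's re-exports.
use burn_dragon_language::{build_model_config, generate_text, CharVocab, GenerationSettings};

fn run(prompt: &str, corpus: &str) -> Result<String, Box<dyn std::error::Error>> {
    // Tokenization lives in this crate, not in burn_dragon_core.
    let vocab = CharVocab::from_text(corpus);              // assumed constructor
    // The model config wraps the shared BDH core; the x_neuron / y_gate /
    // y_neuron / rho state contract stays defined by the core.
    let config = build_model_config(vocab.len())?;         // assumed signature
    // Generation drives the core's recurrent state from the outside.
    let settings = GenerationSettings::default();          // assumed Default impl
    let text = generate_text(&config, &vocab, prompt, &settings)?; // assumed signature
    Ok(text)
}
```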
Re-exports
pub use bitnet_artifact::BITNET_ARTIFACT_BINARY_MAGIC;
pub use bitnet_artifact::LanguageBitNetArtifactBundle;
pub use bitnet_artifact::deserialize_bitnet_artifact_binary;
pub use bitnet_artifact::serialize_bitnet_artifact_binary;
pub use config::ContextStrategyConfig;
pub use config::GenerationConfig;
pub use config::GenerationOutputFormat;
pub use config::GenerationTokenizerSourceConfig;
pub use config::ModelOverrides;
pub use generation::ContextStrategy;
pub use generation::GenerationProfileSnapshot;
pub use generation::GenerationSettings;
pub use generation::generate_text;
pub use generation::generate_tokens;
pub use generation::generate_tokens_chunked;
pub use generation::generation_profile_reset;
pub use generation::generation_profile_snapshot;
pub use generation::prefill_state;
pub use generation::resolve_context_strategy;
pub use generation::sample_next_token;
pub use inference::WgpuFusedCoreOverride;
pub use inference::apply_wgpu_fused_core_override;
pub use inference::build_model_config;
pub use inference::build_model_config_with_tokenizer;
pub use inference::is_wgpu_backend_name;
pub use loss::language_model_loss;
pub use summary_events::resolve_summary_memory_write_triggers;
pub use summary_events::summary_event_mask_from_flat_batch;
pub use summary_events::summary_event_mask_from_tokens;
pub use summary_events::summary_event_mask_tensor;
pub use tokenizer::char_vocab::CharVocab;
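As a rough illustration of the artifact helpers re-exported above, a serialize/deserialize round-trip might look like the following; the function and type names are taken from the re-export list, but both signatures and the magic-prefix check are assumptions:

```rust
// Hypothetical round-trip; names come from the re-exports above, but the
// signatures and the &[u8] type of the magic constant are assumed.
use burn_dragon_language::{
    deserialize_bitnet_artifact_binary, serialize_bitnet_artifact_binary,
    LanguageBitNetArtifactBundle, BITNET_ARTIFACT_BINARY_MAGIC,
};

fn round_trip(
    bundle: &LanguageBitNetArtifactBundle,
) -> Result<LanguageBitNetArtifactBundle, Box<dyn std::error::Error>> {
    let bytes = serialize_bitnet_artifact_binary(bundle)?;      // assumed signature
    // A binary artifact is expected to begin with the magic header.
    assert!(bytes.starts_with(BITNET_ARTIFACT_BINARY_MAGIC));   // assumed: magic is &[u8]
    let restored = deserialize_bitnet_artifact_binary(&bytes)?; // assumed signature
    Ok(restored)
}
```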
Modules
- api: Curated language-facing Dragon API.
- bitnet_artifact
- config
- generation
- inference
- loss
- summary_events
- tokenizer
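For the tokenizer module, a char-level round-trip sketch; `CharVocab` is real per the re-exports above, but `from_text`, `encode`, and `decode` are assumed method names and signatures:

```rust
// Hypothetical usage; CharVocab is re-exported by this crate, but all
// three method names and their signatures are assumptions.
use burn_dragon_language::CharVocab;

fn demo() {
    let vocab = CharVocab::from_text("hello dragon");  // assumed constructor
    let ids = vocab.encode("hello");                   // assumed: &str -> Vec<u32>
    let text = vocab.decode(&ids);                     // assumed: &[u32] -> String
    assert_eq!(text, "hello");
}
```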