//! # Carbon Core
//!
//! `carbon-core` is a framework designed for building customizable and
//! extensible indexers tailored to Solana blockchain data. It facilitates
//! efficient data ingestion, transformation, and processing, supporting a wide
//! range of use cases from transaction parsing to complex instruction analysis.
//! This crate includes modular components that enable users to process
//! blockchain data flexibly and with ease.
//!
//! The true power of this framework emerges when all of its components are
//! used in combination with one another.
//!
//! ## Modules Overview
//!
//! - **[`account`]**: Manages account data processing, including decoding and
//! update handling. Account data is processed through pipes that support
//! custom decoders and processors.
//!
//! - **[`account_deletion`]**: Handles the deletion of accounts and processes
//! these events in the pipeline.
//!
//! - **[`collection`]**: Defines collections for instruction decoding, allowing
//! for customized instruction parsers that handle specific instruction sets.
//!
//! - **[`datasource`]**: Provides data ingestion capabilities, enabling the
//! integration of external data sources into the pipeline. Supports
//! Solana-specific data structures.
//!
//! - **[`deserialize`]**: Contains utilities for data deserialization,
//! including helper functions for parsing Solana transactions and other
//! binary data formats.
//!
//! - **[`error`]**: Defines error types used throughout the crate, providing
//! consistent error handling for the framework.
//!
//! - **[`filter`]**: Provides a flexible filtering system that allows selective
//! processing of updates based on criteria such as datasource ID, update
//! content, or custom logic. Filters can be applied to different types of
//! updates (accounts, instructions, transactions, account deletions, and
//! block details) to control which updates a given pipe processes; see the
//! sketch following this list.
//!
//! - **[`instruction`]**: Supports instruction parsing and processing within
//! transactions. This module includes structures and traits for decoding and
//! handling transaction instructions.
//!
//! - **[`metrics`]**: Facilitates performance monitoring and metric recording
//! within the pipeline. Metrics can be customized and are recorded at each
//! processing stage for monitoring and debugging purposes.
//!
//! - **[`pipeline`]**: Represents the core of the framework, defining the main
//! pipeline structure that manages data flow and processing. The pipeline
//! integrates data sources, processing pipes, and metrics to provide a
//! complete data processing solution.
//!
//! - **[`postgres`]**: Provides support for PostgreSQL database operations,
//! including table definitions, insert, upsert, and delete operations.
//! This module is designed to be used in conjunction with the `sqlx` crate
//! for database interactions.
//!
//! - **[`processor`]**: Contains traits and implementations for processing data
//! in the pipeline. This module allows for the creation of custom data
//! processors that can be integrated into various stages of the pipeline.
//!
//! - **[`schema`]**: Defines transaction schemas, allowing for structured
//! parsing and validation of transaction data based on specified rules.
//! Supports complex nested instruction matching for comprehensive transaction
//! analysis.
//!
//! - **[`transaction`]**: Manages transaction data, including metadata
//! extraction and parsing. This module supports transaction validation and
//! processing, enabling detailed transaction insights.
//!
//! - **[`transformers`]**: Provides utility functions for transforming and
//! restructuring data. This module includes functions for converting Solana
//! transaction data into formats suitable for processing within the
//! framework.
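//!
//! Expanding on the [`filter`] entry above, here is a minimal sketch of a
//! custom filter that keeps only updates originating from one datasource.
//! This is illustrative, not the crate's confirmed API: it assumes a `Filter`
//! trait whose `filter` method receives the originating datasource ID along
//! with the update and returns `true` to keep it.
//!
//! ```ignore
//! /// Hypothetical filter: passes updates from a single datasource only.
//! pub struct DatasourceIdFilter {
//!     allowed: DatasourceId,
//! }
//!
//! impl Filter for DatasourceIdFilter {
//!     fn filter(&self, datasource_id: &DatasourceId, _update: &Update) -> bool {
//!         // Keep the update only if it came from the allowed datasource.
//!         *datasource_id == self.allowed
//!     }
//! }
//! ```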
//!
//! ## Quick Start
//!
//! To create a new `carbon-core` pipeline, start by configuring data sources,
//! processing pipes, and metrics in the [`pipeline::PipelineBuilder`]. Below is
//! a basic example demonstrating how to set up a pipeline:
//!
//! ```ignore
//! use std::sync::Arc;
//!
//! carbon_core::pipeline::Pipeline::builder()
//!     .datasource(transaction_crawler)
//!     .metrics(Arc::new(LogMetrics::new()))
//!     .metrics(Arc::new(PrometheusMetrics::new()))
//!     .instruction(
//!         TestProgramDecoder,
//!         TestProgramProcessor
//!     )
//!     .account(
//!         TestProgramDecoder,
//!         TestProgramAccountProcessor
//!     )
//!     .transaction(TEST_SCHEMA.clone(), TestProgramTransactionProcessor)
//!     .account_deletions(TestProgramAccountDeletionProcessor)
//!     .build()?
//!     .run()
//!     .await?;
//! ```
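//!
//! Calling `build()?` finalizes the configuration, and `.run().await?` starts
//! the pipeline. The `TestProgramProcessor`-style types passed to the builder
//! implement the processing trait from the [`processor`] module. The exact
//! trait shape may differ between versions; the following is a hedged sketch
//! assuming an async `process` method over the paired decoder's output (all
//! names here are illustrative, not part of the crate's API):
//!
//! ```ignore
//! use async_trait::async_trait;
//!
//! pub struct MyInstructionProcessor;
//!
//! #[async_trait]
//! impl Processor for MyInstructionProcessor {
//!     // Assumed associated type: whatever the paired decoder produces.
//!     type InputType = DecodedTestProgramInstruction;
//!
//!     async fn process(&mut self, input: Self::InputType) -> CarbonResult<()> {
//!         // Custom handling: log, persist, or forward the decoded data.
//!         log::info!("decoded instruction: {:?}", input);
//!         Ok(())
//!     }
//! }
//! ```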
//!
//! ## Crate Features
//!
//! - **Modular Design**: Components can be easily added or replaced, allowing
//! for a high degree of customization.
//! - **Concurrency Support**: Built with asynchronous Rust, enabling efficient
//! data processing in parallel.
//! - **Solana-Specific**: Tailored to handle Solana blockchain data structures,
//! making it ideal for blockchain data analysis and transaction processing.
//!
//! ## Notes
//!
//! - `carbon-core` integrates with Solana's SDK, leveraging types and data
//! structures specific to the Solana blockchain.
//! - This framework is designed for advanced use cases, such as blockchain
//! indexing, transaction monitoring, and custom data analysis.
//!
//! Explore each module in detail to understand their individual functions and
//! to learn how to customize and extend `carbon-core` to suit your specific
//! data processing requirements.
pub use borsh;
pub use log;