Elusion 🦀 DataFrame Library for Everybody!

Elusion is a high-performance DataFrame library designed for in-memory data formats such as CSV, JSON, PARQUET, and DELTA, with ODBC database connections for MySQL and PostgreSQL, Azure Blob Storage connections, and HTTPS REST API connections.
All DataFrame operations, reading, and writing can be placed in the PipelineScheduler for automated data engineering pipelines.
DataFrame operations are built atop the DataFusion SQL query engine; database operations are built atop Arrow ODBC; Azure Blob HTTPS operations are built atop Azure Storage, with BLOB and DFS (Data Lake Storage Gen2) endpoints available; pipeline scheduling is built atop Tokio Cron Scheduler; REST API calls are built atop Reqwest. (Scroll down for examples.)
Tailored for Data Engineers and Data Analysts seeking a powerful abstraction over data transformations. Elusion streamlines complex operations like filtering, joining, aggregating, and more with its intuitive, chainable DataFrame API, and provides a robust interface for managing and querying data efficiently.
Core Philosophy
Elusion wants you to be you!
Elusion offers flexibility in constructing queries without enforcing specific patterns or chaining orders, unlike SQL, PySpark, Polars, or Pandas. You can build your queries in any sequence that best fits your logic, writing functions in a manner that makes sense to you. Regardless of the order of function calls, Elusion ensures consistent results.
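For example, a minimal sketch of that order independence, assuming a sales_order_df loaded as shown further below (column names are illustrative) — filter-then-select and select-then-filter yield the same result:

// Both chains produce the same result, regardless of call order:
let a = sales_order_df.clone()
    .filter("billable_value > 100.0")
    .select(["customer_name", "billable_value"])
    .elusion("a").await?;

let b = sales_order_df
    .select(["customer_name", "billable_value"])
    .filter("billable_value > 100.0")
    .elusion("b").await?;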
Platform Compatibility
Tested for MacOS, Linux and Windows

Security
The codebase has undergone rigorous auditing and security testing, ensuring that it is ready for production use.
Key Features
🔄 Job Scheduling (PipelineScheduler)
Flexible Intervals: From 1 minute to 30 days scheduling intervals. Graceful Shutdown: Built-in Ctrl+C signal handling for clean termination. Async Support: Built on tokio for non-blocking operations.
🌐 External Data Sources Integration
- Azure Blob Storage: Direct integration with Azure Blob Storage for reading data files.
- Database Connectors: ODBC support for seamless data access from MySQL and PostgreSQL databases.
- REST API Integration: Built-in support for fetching data from REST APIs with customizable Headers, Params, Pagination, Dates...
🚀 High-Performance DataFrame Operations
Seamless Data Loading: Easily load and process data from CSV, PARQUET, JSON, and DELTA table files. SQL-Like Transformations: Execute transformations such as SELECT, AGG, STRING FUNCTIONS, JOIN, FILTER, GROUP BY, and WINDOW with ease.
📉 Aggregations and Analytics
Comprehensive Aggregations: Utilize built-in functions like SUM, AVG, MEAN, MEDIAN, MIN, COUNT, MAX, and more. Advanced Scalar Math: Perform calculations using functions such as ABS, FLOOR, CEIL, SQRT, ISNAN, ISZERO, PI, POWER, and others.
🔗 Flexible Joins
Diverse Join Types: Perform joins using INNER, LEFT, RIGHT, FULL, and other join types. Intuitive Syntax: Easily specify join conditions and aliases for clarity and simplicity.
🪟 Window Functions
Analytical Capabilities: Implement window functions like RANK, DENSE_RANK, ROW_NUMBER, and custom partition-based calculations to perform advanced analytics.
🔄 Pivot and Unpivot Functions
Data Reshaping: Transform your data structure using PIVOT and UNPIVOT functions to suit your analytical needs.
📊 Plotting
You can create individual HTML files with a single Plot, or HTML reports with multiple Plots: Bar, Line, Pie, Donut, Histogram, TimeSeries...
🧹 Clean Query Construction
Readable Queries: Construct SQL queries that are both readable and reusable. Advanced Query Support: Utilize Common Table Expressions (CTEs), subqueries, and set operations such as UNION, UNION ALL, INTERSECT, and EXCEPT.
🛠️ Easy-to-Use API
Chainable Interface: Build queries using a chainable and intuitive API for streamlined development. Debugging Support: Access readable debug outputs of the generated SQL for easy verification and troubleshooting. Data Preview: Quickly preview your data by displaying a subset of rows in the terminal. Composable Queries: Seamlessly chain transformations to create reusable and testable workflows.
Installation
To add Elusion to your Rust project, include the following lines in your Cargo.toml under [dependencies]:
= "1.5.1"
= { = "1.42.0", = ["rt-multi-thread"] }
Rust version needed
>= 1.81
Usage examples:
MAIN function
// Import everything needed
use elusion::prelude::*;

#[tokio::main]
async fn main() -> ElusionResult<()> {
    // your code here
    Ok(())
}
Schema
SCHEMA IS DYNAMICALLY INFERRED since v0.2.5
LOADING
- Loading data into CustomDataFrame can be from:
- In-Memory data formats: CSV, JSON, PARQUET, DELTA
- Azure Blob Storage endpoints (BLOB, DFS)
- REST API endpoints
- ODBC Connectors (databases)
-> NEXT is an example of reading data from local files;
further below are examples for Azure Blob Storage, REST APIs, and ODBC
LOADING data from Files into CustomDataFrame (in-memory data formats)
- File extensions are automatically recognized
- All you have to do is provide the path to your file
let csv_data = "C:\\Borivoj\\RUST\\Elusion\\sales_data.csv";
let parquet_path = "C:\\Borivoj\\RUST\\Elusion\\prod_data.parquet";
let json_path = "C:\\Borivoj\\RUST\\Elusion\\db_data.json";
let delta_path = "C:\\Borivoj\\RUST\\Elusion\\agg_sales"; // for DELTA you just specify folder name without extension
Creating CustomDataFrame
2 arguments needed: Path, Table Alias
let df_sales = CustomDataFrame::new(csv_data, "sales").await?;
let df_customers = CustomDataFrame::new(parquet_path, "customers").await?; // table aliases are illustrative
ALIAS column names in SELECT() function (AS is case insensitive)
let customers_alias = df_customers
    .select(["CustomerKey AS customer_key", "FirstName as first_name", "LastName AS last_name"]); // column names are illustrative
Where to use which Functions:
Scalar and Operators -> in SELECT() function
Aggregation Functions -> in AGG() function
String Column Functions -> in STRING_FUNCTIONS() function (all three are sketched below)
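A hedged sketch combining all three families in one chain (column names are illustrative):

let overview = sales_order_df.clone()
    .select(["customer_name", "billable_value * 2 AS double_billable"]) // scalar math and operators
    .string_functions(["UPPER(customer_name) AS customer_upper"])       // string functions
    .agg(["SUM(billable_value) AS total_billable"])                     // aggregations
    .group_by_all()
    .elusion("overview").await?;
overview.display().await?;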
Numerical Operators (supported +, -, * , / , %)
let num_ops_sales = sales_order_df.clone()
    .select([
        "customer_name",
        "order_date",
        "billable_value",
        "billable_value * 2 AS double_billable_value",  // multiplication
        "billable_value / 100 AS percentage_billable"   // division
    ]) // column names are illustrative
    .filter("billable_value > 100.0")
    .order_by(["order_date"], [true])
    .limit(10);

let num_ops_res = num_ops_sales.elusion("numerical_df").await?;
num_ops_res.display().await?;
FILTERING
let filter_df = sales_order_df
    .select(["customer_name", "order_date", "billable_value"]) // column names are illustrative
    .filter_many([("order_date > '2021-07-04'"), ("billable_value > 100.0")])
    .order_by(["order_date"], [true])
    .limit(10);

let filtered = filter_df.elusion("filter_df").await?;
filtered.display().await?;
SCALAR functions
let scalar_df = sales_order_df
    .select([
        "customer_name",
        "order_date",
        "ABS(billable_value) AS abs_billable_value",
        "ROUND(SQRT(billable_value), 2) AS sqrt_billable_value"
    ]) // column names are illustrative
    .filter("billable_value > 100.0")
    .order_by(["order_date"], [true])
    .limit(10);

let scalar_res = scalar_df.elusion("scalar_df").await?;
scalar_res.display().await?;
AGGREGATE functions with nested Scalar functions
let scalar_df = sales_order_df
    .select(["customer_name", "order_date"])
    .agg([
        "ROUND(AVG(ABS(billable_value)), 2) AS avg_abs_billable",
        "SUM(billable_value) AS total_billable",
        "MAX(ABS(billable_value)) AS max_abs_billable"
    ]) // column names are illustrative
    .group_by(["customer_name", "order_date"])
    .filter("total_billable > 100.0")
    .order_by(["total_billable"], [false])
    .limit(10);

let scalar_res = scalar_df.elusion("agg_df").await?;
scalar_res.display().await?;
Numerical Operators, Scalar Functions, Aggregated Functions...
let mix_query = sales_order_df
    .select([
        "customer_name",
        "order_date",
        "ABS(billable_value) AS abs_billable_value",
        "billable_value * 2 AS double_billable_value"
    ]) // column names are illustrative
    .agg(["ROUND(AVG(ABS(billable_value)), 2) AS avg_abs_billable", "SUM(billable_value) AS total_billable"])
    .filter("billable_value > 50.0")
    .group_by_all()
    .order_by_many([("total_billable", false), ("order_date", true)]);

let mix_res = mix_query.elusion("mix_query").await?;
mix_res.display().await?;
Supported Aggregation functions
SUM, AVG, MEAN, MEDIAN, MIN, COUNT, MAX,
LAST_VALUE, FIRST_VALUE,
GROUPING, STRING_AGG, ARRAY_AGG, VAR, VAR_POP,
VAR_POPULATION, VAR_SAMP, VAR_SAMPLE,
BIT_AND, BIT_OR, BIT_XOR, BOOL_AND, BOOL_OR
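A hedged sketch exercising a few functions from this list (column names are illustrative):

let agg_demo = sales_order_df.clone()
    .select(["order_date"])
    .agg([
        "STRING_AGG(customer_name, ', ') AS customers",
        "MEDIAN(billable_value) AS billable_median",
        "VAR(billable_value) AS billable_variance"
    ])
    .group_by(["order_date"])
    .elusion("agg_demo").await?;
agg_demo.display().await?;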
Supported Scalar Math Functions
ABS, FLOOR, CEIL, SQRT, ISNAN, ISZERO,
PI, POW, POWER, RADIANS, RANDOM, ROUND,
FACTORIAL, ACOS, ACOSH, ASIN, ASINH,
COS, COSH, COT, DEGREES, EXP,
SIN, SINH, TAN, TANH, TRUNC, CBRT,
ATAN, ATAN2, ATANH, GCD, LCM, LN,
LOG, LOG10, LOG2, NANVL, SIGNUM
JOIN
JOIN examples with single condition and 2 dataframes, AGGREGATION, GROUP BY
let single_join = df_sales
    .join(df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER") // aliases/columns are illustrative
    .select(["s.OrderDate", "c.FirstName", "c.LastName"])
    .agg([
        "SUM(s.OrderQuantity) AS total_quantity",
        "AVG(s.OrderQuantity) AS avg_quantity"
    ])
    .group_by(["s.OrderDate", "c.FirstName", "c.LastName"])
    .having("total_quantity > 10")
    .order_by(["total_quantity"], [false]) // true is ascending, false is descending
    .limit(10);

let join_df1 = single_join.elusion("single_join_df").await?;
join_df1.display().await?;
JOIN with single conditions and 3 dataframes, AGGREGATION, GROUP BY, HAVING, SELECT, ORDER BY
let many_joins = df_sales
    .join_many([
        (df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER"),
        (df_products, ["s.ProductKey = p.ProductKey"], "INNER"),
    ]) // aliases/columns are illustrative
    .select(["c.FirstName", "c.LastName", "p.ProductName"])
    .agg([
        "COUNT(p.ProductKey) AS product_count",
        "SUM(s.OrderQuantity) AS total_quantity"
    ])
    .group_by(["c.FirstName", "c.LastName", "p.ProductName"])
    .having_many([("product_count > 5"), ("total_quantity > 10")])
    .order_by_many([("total_quantity", true), ("product_count", false)])
    .limit(10);

let join_df3 = many_joins.elusion("many_joins_df").await?;
join_df3.display().await?;
JOIN with multiple conditions and 2 data frames
let result_join = orders_df
    .join(
        customers_df,
        ["o.CustomerID = c.CustomerID", "o.RegionID = c.RegionID"], // multiple join conditions
        "INNER"
    )
    .select(["o.OrderID", "c.Name", "o.OrderDate"]) // aliases/columns are illustrative
    .string_functions(["CONCAT(TRIM(c.Name), ' - ', CAST(o.OrderID AS VARCHAR)) AS customer_order"])
    .agg(["SUM(o.Amount) AS total_amount", "AVG(o.Amount) AS avg_amount"])
    .group_by(["o.OrderID", "c.Name", "o.OrderDate"]);

let res_joins = result_join.elusion("multi_cond_join").await?;
res_joins.display().await?;
JOIN_MANY with multiple conditions and 3 data frames
let result_join_many = order_join_df
    .join_many([
        (customer_join_df, ["o.CustomerID = c.CustomerID", "o.RegionID = c.RegionID"], "INNER"),
        (regions_join_df, ["o.RegionID = r.RegionID"], "LEFT"),
    ]) // aliases/columns are illustrative
    .select(["o.OrderID", "c.Name", "r.RegionName"])
    .string_functions(["UPPER(r.RegionName) AS region_upper"])
    .agg(["SUM(o.Amount) AS total_amount"])
    .group_by_all()
    .having("total_amount > 1000")
    .order_by(["total_amount"], [false]);

let res_joins_many = result_join_many.elusion("join_many_res").await?;
res_joins_many.display().await?;
JOIN_MANY with single condition and 3 dataframes, STRING FUNCTIONS, AGGREGATION, GROUP BY, HAVING_MANY, ORDER BY
let str_func_joins = df_sales
    .join_many([
        (df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER"),
        (df_products, ["s.ProductKey = p.ProductKey"], "INNER"),
    ])
    .select(["s.OrderDate", "c.EmailAddress", "p.ProductName"]) // aliases/columns are illustrative
    .string_functions([
        "TRIM(c.EmailAddress) AS clean_email",
        "CONCAT(c.FirstName, ' ', c.LastName) AS full_name"
    ])
    .agg(["COUNT(p.ProductKey) AS product_count", "SUM(s.OrderQuantity) AS total_quantity"])
    .group_by_all()
    .having_many([("product_count > 5"), ("total_quantity > 10")])
    .order_by_many([("total_quantity", true), ("product_count", false)]);

let join_str_df3 = str_func_joins.elusion("str_join_df").await?;
join_str_df3.display().await?;
Currently implemented join types
"INNER", "LEFT", "RIGHT", "FULL",
"LEFT SEMI", "RIGHT SEMI",
"LEFT ANTI", "RIGHT ANTI", "LEFT MARK"
STRING FUNCTIONS
let string_functions_df = df_sales
    .join_many([
        (df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER"),
        (df_products, ["s.ProductKey = p.ProductKey"], "INNER"),
    ])
    .select(["s.OrderDate", "c.EmailAddress", "p.ProductName"]) // aliases/columns are illustrative
    .string_functions([
        "TRIM(c.EmailAddress) AS trimmed_email",
        "UPPER(p.ProductName) AS product_upper",
        "SPLIT_PART(c.EmailAddress, '@', 2) AS email_domain",
        "LEFT(p.ProductName, 10) AS product_short"
    ])
    .agg(["COUNT(c.CustomerKey) AS customer_count"])
    .filter("c.EmailAddress IS NOT NULL")
    .group_by_all()
    .having("customer_count > 1")
    .order_by(["customer_count"], [false]);

let str_df = string_functions_df.elusion("str_func_df").await?;
str_df.display().await?;
Currently Available String functions
1. Basic String Functions:
TRIM - Remove leading/trailing spaces
LTRIM - Remove leading spaces
RTRIM - Remove trailing spaces
UPPER - Convert to uppercase
LOWER - Convert to lowercase
LENGTH or LEN - Get string length
LEFT - Extract leftmost characters
RIGHT - Extract rightmost characters
SUBSTRING - Extract part of string
2. String concatenation:
CONCAT - Concatenate strings
CONCAT_WS - Concatenate with separator
3. String Position and Search:
POSITION - Find position of substring
STRPOS - Find position of substring
INSTR - Find position of substring
LOCATE - Find position of substring
4. String Replacement and Modification:
REPLACE - Replace all occurrences of substring
TRANSLATE - Replace characters
OVERLAY - Replace portion of string
REPEAT - Repeat string
REVERSE - Reverse string characters
5. String Pattern Matching:
LIKE - Pattern matching with wildcards
REGEXP or RLIKE - Pattern matching with regular expressions
6. String Padding:
LPAD - Pad string on left
RPAD - Pad string on right
SPACE - Generate spaces
7. String Case Formatting:
INITCAP - Capitalize first letter of each word
8. String Extraction:
SPLIT_PART - Split string and get nth part
SUBSTR - Get substring
9. String Type Conversion:
TO_CHAR - Convert to string
CAST - Type conversion
CONVERT - Type conversion
10. Control Flow:
CASE
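A hedged sketch of a CASE expression inside STRING_FUNCTIONS() (column names are illustrative):

let case_df = sales_order_df.clone()
    .select(["customer_name", "billable_value"])
    .string_functions([
        "CASE WHEN billable_value > 1000 THEN 'High' WHEN billable_value > 100 THEN 'Medium' ELSE 'Low' END AS billable_bucket"
    ])
    .elusion("case_df").await?;
case_df.display().await?;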
WINDOW functions
Aggregate, Ranking and Analytical functions
let window_query = df_sales
    .join(df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER") // aliases/columns are illustrative
    .select(["s.OrderDate", "c.FirstName", "c.LastName", "s.OrderQuantity"])
    //aggregated window functions
    .window("SUM(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS running_total")
    .window("AVG(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey) AS customer_avg")
    .window("MIN(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey) AS customer_min")
    .window("MAX(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey) AS customer_max")
    .window("COUNT(*) OVER (PARTITION BY c.CustomerKey) AS order_count")
    //ranking window functions
    .window("ROW_NUMBER() OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS row_num")
    .window("RANK() OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderQuantity DESC) AS qty_rank")
    .window("DENSE_RANK() OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderQuantity DESC) AS qty_dense_rank")
    .window("PERCENT_RANK() OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderQuantity) AS qty_pct_rank")
    .window("NTILE(4) OVER (ORDER BY s.OrderQuantity) AS quartile")
    // analytical window functions
    .window("FIRST_VALUE(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS first_qty")
    .window("LAST_VALUE(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS last_qty")
    .window("LAG(s.OrderQuantity, 1) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS prev_qty")
    .window("LEAD(s.OrderQuantity, 1) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS next_qty")
    .window("NTH_VALUE(s.OrderQuantity, 2) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate) AS second_qty");

let window_df = window_query.elusion("window_df").await?;
window_df.display().await?;
Rolling Window Functions
let rollin_query = df_sales
    .join(df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER") // aliases/columns are illustrative
    .select(["s.OrderDate", "c.FirstName", "s.OrderQuantity"])
    //aggregated rolling windows
    .window("SUM(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_sum")
    .window("AVG(s.OrderQuantity) OVER (PARTITION BY c.CustomerKey ORDER BY s.OrderDate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_avg");

let rollin_df = rollin_query.elusion("rolling_df").await?;
rollin_df.display().await?;
UNION, UNION ALL, EXCEPT, INTERSECT
UNION: Combines rows from both, removing duplicates
UNION ALL: Combines rows from both, keeping duplicates
EXCEPT: Difference of two sets (only rows in left minus those in right).
INTERSECT: Intersection of two sets (only rows in both).
//UNION
let df1 = df_sales.clone()
    .join(df_customers.clone(), ["s.CustomerKey = c.CustomerKey"], "INNER")
    .select(["c.FirstName", "c.LastName"])
    .string_functions(["CONCAT(TRIM(c.FirstName), ' ', TRIM(c.LastName)) AS full_name"]); // columns illustrative

let df2 = df_sales
    .join(df_customers, ["s.CustomerKey = c.CustomerKey"], "INNER")
    .select(["c.FirstName", "c.LastName"])
    .string_functions(["CONCAT(TRIM(c.FirstName), ' ', TRIM(c.LastName)) AS full_name"]);

// .clone() keeps df1 and df2 reusable across the set operations below
let union_df = df1.clone().union(df2.clone());

let union_df_final = union_df.elusion("union_df").await?;
union_df_final.display().await?;

//UNION ALL
let union_all_df = df1.clone().union_all(df2.clone());
//EXCEPT
let except_df = df1.clone().except(df2.clone());
//INTERSECT
let intersect_df = df1.intersect(df2);
PIVOT and UNPIVOT
Pivot and Unpivot are async functions.
They should be used separately from other chained functions: 1. directly on the initial CustomDataFrame, or 2. after .elusion() evaluation.
The future needs to be in a final state, so .await? must be used.
// PIVOT
// directly on initial CustomDataFrame
let sales_p = "C:\\Borivoj\\RUST\\Elusion\\SalesData2022.csv";
let df_sales = CustomDataFrame::new(sales_p, "sales").await?;

let pivoted = df_sales
    .pivot(["StoreName"], "Year", "TotalSales", "SUM").await?; // (row ids, pivot column, value column, agg) - names illustrative

let result_pivot = pivoted.elusion("pivoted_df").await?;
result_pivot.display().await?;
// after .elusion() evaluation
let sales_path = "C:\\Borivoj\\RUST\\Elusion\\sales_order_report.csv";
let sales_order_df = CustomDataFrame::new(sales_path, "sales").await?;

let scalar_df = sales_order_df
    .select(["customer_name", "order_date", "ABS(billable_value) AS abs_billable_value"]) // columns illustrative
    .filter("billable_value > 100.0")
    .order_by(["order_date"], [true])
    .limit(10);

// elusion evaluation
let scalar_res = scalar_df.elusion("scalar_df").await?;

let pivoted_scalar = scalar_res
    .pivot(["customer_name"], "order_date", "abs_billable_value", "SUM").await?;

let pivoted_res = pivoted_scalar.elusion("pivoted_scalar").await?;
pivoted_res.display().await?;
// UNPIVOT
let unpivoted = result_pivot
    .unpivot(["StoreName"], ["Year_2021", "Year_2022"], "Year", "TotalSales").await?; // (id cols, cols to unpivot, name col, value col) - names illustrative

let result_unpivot = unpivoted.elusion("unpivoted_df").await?;
result_unpivot.display().await?;

// example 2
let unpivot_scalar = scalar_res
    .unpivot(["customer_name"], ["abs_billable_value"], "metric", "value").await?;

let result_unpivot_scalar = unpivot_scalar.elusion("unpivot_scalar").await?;
result_unpivot_scalar.display().await?;
Statistical Functions
These functions can give you a quick statistical overview of your DataFrame columns and correlations
Currently available: display_stats(), display_null_analysis(), display_correlation_matrix()
df.display_stats().await?;
=== Column Statistics ===
--------------------------------------------------------------------------------
Column: abs_billable_value
------------------------------------------------------------------------------
| Metric | Value | Min | Max |
------------------------------------------------------------------------------
| Records | 10 | - | - |
| Non-null Records | 10 | - | - |
| Mean | 1025.71 | - | - |
| Standard Dev | 761.34 | - | - |
| Value Range | - | 67.4 | 2505.23 |
------------------------------------------------------------------------------
Column: sqrt_billable_value
------------------------------------------------------------------------------
| Metric | Value | Min | Max |
------------------------------------------------------------------------------
| Records | 10 | - | - |
| Non-null Records | 10 | - | - |
| Mean | 29.48 | - | - |
| Standard Dev | 13.20 | - | - |
| Value Range | - | 8.21 | 50.05 |
------------------------------------------------------------------------------
// Display null analysis
// Pass None if you want all columns to be analyzed
df.display_null_analysis(None).await?;
----------------------------------------------------------------------------------------
| Column | Total Rows | Null Count | Null Percentage |
----------------------------------------------------------------------------------------
| total_billable | 10 | 0 | 0.00% |
| order_count | 10 | 0 | 0.00% |
| customer_name | 10 | 0 | 0.00% |
| order_date | 10 | 0 | 0.00% |
| abs_billable_value | 10 | 0 | 0.00% |
----------------------------------------------------------------------------------------
// Display correlation matrix
df.display_correlation_matrix().await?;
=== Correlation Matrix ===
-------------------------------------------------------------------------------------------
| | abs_billable_va | sqrt_billable_v | double_billable | percentage_bill |
-------------------------------------------------------------------------------------------
| abs_billable_va | 1.00 | 0.98 | 1.00 | 1.00 |
| sqrt_billable_v | 0.98 | 1.00 | 0.98 | 0.98 |
| double_billable | 1.00 | 0.98 | 1.00 | 1.00 |
| percentage_bill | 1.00 | 0.98 | 1.00 | 1.00 |
-------------------------------------------------------------------------------------------
DATABASE Connectors
ODBC connectors available for MySQL and PostgreSQL
Requirements: You need to install the ODBC driver for your database
For ODBC connectivity on Ubuntu and macOS you need to install unixodbc:
Ubuntu/Debian: sudo apt-get install unixodbc-dev
macOS: brew install unixodbc
Windows: ODBC drivers are typically included with the OS
Don't forget that you can always load tables from a database into DataFrames and work with the DataFrame API, but for better performance you should aggregate data on the SQL server and then push the result into a DataFrame.
MySQL example
let connection_string = "
Driver={MySQL ODBC 9.1 Unicode Driver};\
Server=127.0.0.1;\
Port=3306;\
Database=your_database_name;\
User=your_user_name;\
Password=your_password";
let sql_query = "
SELECT
b.beer_style,
b.location,
c.color,
AVG(b.fermentation_time) AS avg_fermentation_time,
ROUND(AVG(b.temperature), 2) AS avg_temperature,
ROUND(AVG(b.quality_score), 2) AS avg_quality,
ROUND(AVG(b.brewhouse_efficiency), 2) AS avg_efficiency,
SUM(b.volume_produced) AS total_volume,
ROUND(AVG(b.loss_during_brewing), 2) AS avg_brewing_loss,
ROUND(AVG(b.loss_during_fermentation), 2) AS avg_fermentation_loss,
ROUND(SUM(b.total_sales), 2) AS total_sales,
ROUND(AVG(b.brewhouse_efficiency - (b.loss_during_brewing + b.loss_during_fermentation)), 2) AS net_efficiency
FROM brewery_data b
JOIN colors c ON b.color = c.color_number
WHERE volume_produced > 1000
GROUP BY b.beer_style, b.location, c.color
HAVING avg_quality > 8
ORDER BY total_sales DESC, avg_quality DESC
LIMIT 20
";
let mysql_df = CustomDataFrame::from_db(connection_string, sql_query).await?;
let analysis_df = mysql_df.elusion("mysql_analysis").await?;
analysis_df.display().await?;
PostgreSQL example
let pg_connection = "\
Driver={PostgreSQL UNICODE};\
Servername=127.0.0.1;\
Port=5433;\
Database=your_database_name;\
UID=your_user_name;\
PWD=your_password;\
";
let sql_query = "
SELECT
c.name,
c.email,
SUM(s.quantity * s.price) as total_sales,
COUNT(*) as number_of_purchases
FROM sales s
JOIN customers c ON s.customer_id = c.id
GROUP BY c.id, c.name, c.email
ORDER BY total_sales DESC
";
let pg_df = CustomDataFrame::from_db(pg_connection, sql_query).await?;
let pg_res = pg_df.elusion("pg_df").await?;
pg_res.display().await?;
AZURE Blob Storage Connector
Storage connector is available with BLOB and DFS URL endpoints, along with a SAS token.
Currently supported file types: .JSON and .CSV
The DFS endpoint is "Data Lake Storage Gen2" and behaves more like a real file system, which makes reading operations more efficient, especially at large scale.
BLOB endpoint example
let blob_url= "https://your_storage_account_name.blob.core.windows.net/your-container-name";
let sas_token = "your_sas_token";
let df = CustomDataFrame::from_azure_with_sas_token(blob_url, sas_token, None, "data").await?; // filter-keyword argument assumed; None reads all files
let data_df = df.select(["col1", "col2"]); // column names are illustrative
let test_data = data_df.elusion("azure_blob_df").await?;
test_data.display().await?;
DFS endpoint example
let dfs_url= "https://your_storage_account_name.dfs.core.windows.net/your-container-name";
let sas_token = "your_sas_token";
let df = CustomDataFrame::from_azure_with_sas_token(dfs_url, sas_token, None, "data").await?; // filter-keyword argument assumed; None reads all files
let data_df = df.select(["col1", "col2"]); // column names are illustrative
let test_data = data_df.elusion("azure_dfs_df").await?;
test_data.display().await?;
REST API Connector
Available for fetching data from REST APIs with customizable Headers, Params, Pagination, Dates...
Currently supported response format: .JSON
REST API
// example 1 (public test APIs used as placeholders)
let posts_df = CustomDataFrame::from_api("https://jsonplaceholder.typicode.com/posts", "posts").await?;
posts_df.display().await?;

// example 2
let users_df = CustomDataFrame::from_api("https://jsonplaceholder.typicode.com/users", "users").await?;
users_df.display().await?;

// Dog CEO API (random dog images)
let ceo = CustomDataFrame::from_api("https://dog.ceo/api/breeds/image/random/3", "dogs").await?;
ceo.display().await?;
REST API with Headers
// example 1 (header names/values are illustrative)
use std::collections::HashMap;

let mut headers = HashMap::new();
headers.insert("x-api-key", "your_api_key");

let bin_df = CustomDataFrame::from_api_with_headers("https://api.example.com/data", headers, "bin_data").await?; // URL is a placeholder
bin_df.display().await?;

// example 2
let mut headers = HashMap::new();
// Specify the response format (JSON in this case)
headers.insert("Accept", "application/vnd.github.v3+json");
// Identify your application to the API server
headers.insert("User-Agent", "elusion-example");

let git_hub = CustomDataFrame::from_api_with_headers("https://api.github.com/repos/rust-lang/rust/issues", headers, "github").await?;
git_hub.select(["title", "state"]).display().await?; // columns illustrative

// example 3
let mut headers = HashMap::new();
headers.insert("Accept", "application/json");
headers.insert("User-Agent", "elusion-example");

let pokemon_df = CustomDataFrame::from_api_with_headers("https://pokeapi.co/api/v2/pokemon", headers, "pokemon").await?;
pokemon_df.display().await?;
REST API with Params
// Using OpenLibrary API with params (param names/values are illustrative)
let mut params = HashMap::new();
params.insert("q", "rust programming");
params.insert("limit", "10");

let open_lib: CustomDataFrame = CustomDataFrame::from_api_with_params("https://openlibrary.org/search.json", params, "books").await?;
open_lib.display().await?;

// Random User Generator API with params
let mut params = HashMap::new();
params.insert("results", "10");
params.insert("nat", "us");

let generator = CustomDataFrame::from_api_with_params("https://randomuser.me/api", params, "users").await?;
generator.display().await?;

// JSON Placeholder with query params
let mut params = HashMap::new();
params.insert("userId", "1");
params.insert("_limit", "5");

let multi = CustomDataFrame::from_api_with_params("https://jsonplaceholder.typicode.com/posts", params, "posts").await?;
multi.display().await?;
REST API with Params and Headers
// example 1: GitHub commits with date range (param and header names are illustrative)
let mut params = HashMap::new();
params.insert("since", "2024-01-01T00:00:00Z");
params.insert("until", "2024-12-31T23:59:59Z");

let mut headers = HashMap::new();
headers.insert("Accept", "application/vnd.github.v3+json");
headers.insert("User-Agent", "elusion-example");

let commits_df = CustomDataFrame::from_api_with_params_and_headers(
    "https://api.github.com/repos/rust-lang/rust/commits",
    params,
    headers,
    "commits"
).await?;
commits_df.display().await?;

// example 2: with API key, dates, and other parameters
let mut params = HashMap::new();
params.insert("api_key", "DEMO_KEY");
params.insert("start_date", "2024-01-01");
params.insert("end_date", "2024-01-07");
params.insert("detailed", "true");

let mut headers = HashMap::new();
headers.insert("Accept", "application/json");
headers.insert("User-Agent", "elusion-example");

let nasa = CustomDataFrame::from_api_with_params_and_headers(
    "https://api.nasa.gov/neo/rest/v1/feed",
    params,
    headers,
    "nasa"
).await?;
nasa.display().await?;
REST API with Pagination
// Using ReqRes API with pagination (page/per-page arguments assumed; check the crate docs)
let reqres = CustomDataFrame::from_api_with_pagination("https://reqres.in/api/users", 1, 10, "reqres").await?;
reqres.display().await?;
REST API with Dates
// Example 1: COVID-19 historical data (URL and date arguments are placeholders)
let posts_df = CustomDataFrame::from_api_with_dates("https://disease.sh/v3/covid-19/historical/all", "2024-01-01", "2024-01-31", "covid").await?;
posts_df.display().await?;

// Example 2: COVID-19 historical data for another date range
let covid_df = CustomDataFrame::from_api_with_dates("https://disease.sh/v3/covid-19/historical/usa", "2024-02-01", "2024-02-29", "covid_usa").await?;
covid_df.display().await?;
Common Header types
//Accept - Specifies the expected response format
headers.insert("Accept", "application/json");
//Authorization - For authenticated requests
headers.insert("Authorization", "Bearer your_token");
//User-Agent - Identifies your application
headers.insert("User-Agent", "your-app-name");
//Content-Type - Specifies the format of sent data
headers.insert("Content-Type", "application/json");
Pipeline Scheduler
Time is set according to UTC
Currently available job frequencies
"1min","2min","5min","10min","15min","30min" ,
"1h","2h","3h","4h","5h","6h","7h","8h","9h","10h","11h","12h","24h"
"2days","3days","4days","5days","6days","7days","14days","30days"
PipelineScheduler Example (parsing data from Azure Blob Storage, a DataFrame operation, and writing to Parquet)
use elusion::prelude::*;
// the full async main() for this pipeline is sketched below
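A minimal end-to-end sketch, assuming PipelineScheduler::new(frequency, job) as the constructor and reusing the Azure reader and Parquet writer shown elsewhere in this README (URLs, token, paths, and column names are placeholders):

use elusion::prelude::*;

#[tokio::main]
async fn main() -> ElusionResult<()> {
    // Run the pipeline every 5 minutes (frequency string from the list above)
    let scheduler = PipelineScheduler::new("5min", || async {
        let blob_url = "https://your_storage_account_name.blob.core.windows.net/your-container-name";
        let sas_token = "your_sas_token";

        // 1. Parse data from Azure Blob Storage
        let df = CustomDataFrame::from_azure_with_sas_token(blob_url, sas_token, None, "data").await?;

        // 2. DataFrame operation (column name illustrative)
        let result = df
            .agg(["SUM(billable_value) AS total_billable"])
            .group_by_all()
            .elusion("pipeline_df").await?;

        // 3. Write to Parquet
        result
            .write_to_parquet("overwrite", "C:\\Borivoj\\RUST\\Elusion\\pipeline_result.parquet", None)
            .await?;

        Ok(())
    }).await?;

    // Blocks until Ctrl+C, then terminates cleanly (assumed shutdown API)
    scheduler.shutdown().await?;
    Ok(())
}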
PLOTTING
Available Plots: Bar, Pie, Donut, Line, TimeSeries, Histogram, Box
Below are examples of how you can simply create different plots and a report
let sales_path = "C:\\Borivoj\\RUST\\Elusion\\sales_order_report.csv";
let sales_order_df = CustomDataFrame::new(sales_path, "sales").await?;

let mix_df3 = sales_order_df
    .select(["customer_name", "order_date", "billable_value"]) // columns illustrative
    .string_functions(["UPPER(customer_name) AS customer_upper"])
    .agg(["SUM(billable_value) AS total_billable", "COUNT(order_date) AS order_count"])
    .group_by_all()
    .filter("total_billable > 100.0")
    .limit(10);

let mix = mix_df3.elusion("mix_df").await?;
// PLOTTING (x/y column and title parameters are assumed; check the crate docs for exact signatures)
// plot_bar()
let billable_plot = mix.plot_bar("customer_name", "total_billable", Some("Billable by Customer")).await?;
save_plot(&billable_plot, "bar_plot.html").await?; // save_plot signature assumed

// plot_line()
let billable_line = mix.plot_line("order_date", "total_billable", Some("Billable over Time")).await?;
save_plot(&billable_line, "line_plot.html").await?;

// plot_time_series()
let billable_ts = mix.plot_time_series("order_date", "total_billable", Some("Billable Time Series")).await?;
save_plot(&billable_ts, "time_series_plot.html").await?;

// plot_histogram()
let billable_hist = mix.plot_histogram("total_billable", Some("Billable Distribution")).await?;
save_plot(&billable_hist, "histogram_plot.html").await?;

// plot_pie()
let billable_pie = mix.plot_pie("customer_name", "total_billable", Some("Billable Share")).await?;
save_plot(&billable_pie, "pie_plot.html").await?;

// plot_donut()
let billable_donut = mix.plot_donut("customer_name", "total_billable", Some("Billable Share (Donut)")).await?;
save_plot(&billable_donut, "donut_plot.html").await?;

// Create a report by appending all plots that you created
let plots = [
    (&billable_plot, "Bar Plot"),
    (&billable_line, "Line Plot"),
    (&billable_ts, "Time Series Plot"),
    (&billable_hist, "Histogram"),
    (&billable_pie, "Pie Chart"),
    (&billable_donut, "Donut Chart"),
];
create_report(&plots, "Sales Report", "C:\\Borivoj\\RUST\\Elusion\\Plots\\report.html").await?; // create_report signature assumed
Example of single Plot
Example of Report with multiple Plots
JSON files
Currently supported files can include: Arrays, Objects. Works best if you can make the structure flat ("key":"value")
For JSON, all field types are inferred as VARCHAR/TEXT/STRING
// example json structure: flat "key": "value" pairs (sample shape is illustrative)
// [{"id": "1", "name": "Alice", "amount": "100.5"}, ...]
let json_path = "C:\\Borivoj\\RUST\\Elusion\\test.json";
let json_df = CustomDataFrame::new(json_path, "test").await?;

// example json structure: objects that contain arrays also work (sample shape is illustrative)
// {"records": [{"key": "value"}, {"key": "value"}]}
let json_path = "C:\\Borivoj\\RUST\\Elusion\\test2.json";
let json_df = CustomDataFrame::new(json_path, "test2").await?;
WRITERS
Writing to Parquet File
We have 2 writing modes: Overwrite and Append
// overwrite existing file
result_df
    .write_to_parquet(
        "overwrite",
        "C:\\Borivoj\\RUST\\Elusion\\Parquet\\result.parquet",
        None // write options argument assumed; None uses defaults
    )
    .await
    .expect("Failed to overwrite Parquet file");
// append to existing file
result_df
    .write_to_parquet(
        "append",
        "C:\\Borivoj\\RUST\\Elusion\\Parquet\\result.parquet",
        None
    )
    .await
    .expect("Failed to append to Parquet file");
Writing to CSV File
CSV writing options are mandatory
has_headers is set dynamically: TRUE for Overwrite mode, FALSE for Append mode.
let custom_csv_options = CsvWriteOptions {
    delimiter: b',',
    escape: b'\\',
    quote: b'"',
    double_quote: false,
    null_value: "NULL".to_string(),
}; // field names follow the crate's CsvWriteOptions; values shown are illustrative defaults
We have 2 writing modes: Overwrite and Append
// overwrite existing file
result_df
    .write_to_csv(
        "overwrite",
        "C:\\Borivoj\\RUST\\Elusion\\CSV\\result.csv",
        custom_csv_options.clone() // clone so the options can be reused below
    )
    .await
    .expect("Failed to overwrite CSV file");
// append to existing file
result_df
    .write_to_csv(
        "append",
        "C:\\Borivoj\\RUST\\Elusion\\CSV\\result.csv",
        custom_csv_options
    )
    .await
    .expect("Failed to append to CSV file");
Writing to DELTA table / lake
We can write to Delta in 2 modes: Overwrite and Append
The partitioning column is OPTIONAL. If you decide to partition, make sure you don't need that column later, as you won't be able to read it back into the DataFrame
Once a Delta table is written with a partitioning column, any APPEND to it must use the same partitioning column
// Overwrite (partition column name is illustrative)
result_df
    .write_to_delta_table(
        "overwrite",
        "C:\\Borivoj\\RUST\\Elusion\\agg_sales",
        Some(vec!["order_date".into()])
    )
    .await
    .expect("Failed to overwrite Delta table");
// Append (must use the same partition column as the original write)
result_df
    .write_to_delta_table(
        "append",
        "C:\\Borivoj\\RUST\\Elusion\\agg_sales",
        Some(vec!["order_date".into()])
    )
    .await
    .expect("Failed to append to Delta table");
License
Elusion is distributed under the MIT License. However, since it builds upon DataFusion, which is distributed under the Apache License 2.0, some parts of this project are subject to the terms of the Apache License 2.0. For full details, see the LICENSE.txt file.
Acknowledgments
This library leverages the power of Rust's type system and libraries like DataFusion, Apache Arrow, Arrow ODBC... for efficient query processing. Special thanks to the open-source community for making this project possible.
Where you can find me:
LinkedIn - LinkedIn
YouTube channel - YouTube
Udemy Instructor - Udemy