{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 🚀 TrustformeRS Interactive Tutorial\n",
"\n",
"Welcome to the comprehensive TrustformeRS tutorial! This notebook will guide you through all the major features of TrustformeRS, the high-performance machine learning library written in Rust.\n",
"\n",
"## 📚 What You'll Learn\n",
"\n",
"1. **Basic Pipeline Usage** - Text classification, generation, and Q&A\n",
"2. **Advanced Features** - Batching, streaming, and optimization\n",
"3. **Model Management** - Loading, configuring, and comparing models\n",
"4. **Performance Optimization** - JIT compilation, caching, and profiling\n",
"5. **Production Features** - Serving, monitoring, and deployment\n",
"\n",
"## 🛠️ Setup\n",
"\n",
"First, let's install TrustformeRS and set up our environment:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install TrustformeRS (in a real environment)\n",
"# !pip install trustformers\n",
"\n",
"# For this tutorial, we'll use mock implementations\n",
"import json\n",
"import time\n",
"import random\n",
"from typing import Dict, List, Any, Optional\n",
"from dataclasses import dataclass\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"print(\"✅ TrustformeRS environment ready!\")\n",
"print(\"📖 This tutorial uses simulated TrustformeRS functionality for demonstration\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🎯 Chapter 1: Basic Pipeline Usage\n",
"\n",
"Let's start with the fundamental building blocks of TrustformeRS - pipelines!"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mock TrustformeRS classes for tutorial purposes\n",
"@dataclass\n",
"class ClassificationResult:\n",
" label: str\n",
" score: float\n",
" all_scores: List[Dict[str, Any]]\n",
"\n",
"@dataclass\n",
"class GenerationResult:\n",
" generated_text: str\n",
" prompt: str\n",
" \n",
"@dataclass\n",
"class QAResult:\n",
" answer: str\n",
" confidence: float\n",
" context: str\n",
" question: str\n",
"\n",
"class MockTextClassificationPipeline:\n",
" def __init__(self, model_name: str):\n",
" self.model_name = model_name\n",
" print(f\"🤖 Loading {model_name}...\")\n",
" time.sleep(0.5) # Simulate loading\n",
" print(\"✅ Model loaded successfully!\")\n",
" \n",
" def __call__(self, text: str) -> ClassificationResult:\n",
" # Simple sentiment analysis simulation\n",
" positive_words = ['good', 'great', 'excellent', 'amazing', 'wonderful', 'love', 'fantastic']\n",
" negative_words = ['bad', 'terrible', 'awful', 'hate', 'horrible', 'worst']\n",
" \n",
" text_lower = text.lower()\n",
" positive_count = sum(1 for word in positive_words if word in text_lower)\n",
" negative_count = sum(1 for word in negative_words if word in text_lower)\n",
" \n",
" if positive_count > negative_count:\n",
" label, score = \"POSITIVE\", 0.85 + min(positive_count * 0.1, 0.15)\n",
" elif negative_count > positive_count:\n",
" label, score = \"NEGATIVE\", 0.85 + min(negative_count * 0.1, 0.15)\n",
" else:\n",
" label, score = \"NEUTRAL\", 0.55\n",
" \n",
" return ClassificationResult(\n",
" label=label,\n",
" score=score,\n",
" all_scores=[\n",
" {\"label\": \"POSITIVE\", \"score\": score if label == \"POSITIVE\" else 1.0 - score},\n",
" {\"label\": \"NEGATIVE\", \"score\": score if label == \"NEGATIVE\" else 1.0 - score},\n",
" ]\n",
" )\n",
" \n",
" def batch(self, texts: List[str]) -> List[ClassificationResult]:\n",
" print(f\"🔄 Processing batch of {len(texts)} texts...\")\n",
" return [self(text) for text in texts]\n",
"\n",
"class MockTextGenerationPipeline:\n",
" def __init__(self, model_name: str):\n",
" self.model_name = model_name\n",
" print(f\"🤖 Loading {model_name}...\")\n",
" time.sleep(0.5)\n",
" print(\"✅ Model loaded successfully!\")\n",
" \n",
" def __call__(self, prompt: str, max_length: int = 50) -> GenerationResult:\n",
" continuations = [\n",
" \"is an incredible advancement in machine learning technology.\",\n",
" \"represents the future of artificial intelligence and natural language processing.\",\n",
" \"demonstrates the power of modern deep learning architectures.\",\n",
" \"shows how transformer models can understand and generate human-like text.\"\n",
" ]\n",
" \n",
" continuation = continuations[len(prompt) % len(continuations)]\n",
" generated = f\"{prompt} {continuation}\"\n",
" \n",
" # Simulate word limit\n",
" words = generated.split()\n",
" if len(words) > max_length:\n",
" generated = \" \".join(words[:max_length]) + \"...\"\n",
" \n",
" return GenerationResult(generated_text=generated, prompt=prompt)\n",
"\n",
"def pipeline(task: str, model: str = None):\n",
" \"\"\"Factory function to create pipelines\"\"\"\n",
" default_models = {\n",
" \"text-classification\": \"distilbert-base-uncased-finetuned-sst-2-english\",\n",
" \"text-generation\": \"gpt2\",\n",
" \"question-answering\": \"distilbert-base-cased-distilled-squad\"\n",
" }\n",
" \n",
" model_name = model or default_models.get(task, \"default-model\")\n",
" \n",
" if task == \"text-classification\":\n",
" return MockTextClassificationPipeline(model_name)\n",
" elif task == \"text-generation\":\n",
" return MockTextGenerationPipeline(model_name)\n",
" else:\n",
" raise ValueError(f\"Task {task} not supported in this tutorial\")\n",
"\n",
"print(\"🔧 TrustformeRS pipeline factory ready!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 📊 Text Classification\n",
"\n",
"Let's start with text classification - one of the most common NLP tasks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a text classification pipeline\n",
"classifier = pipeline(\"text-classification\")\n",
"\n",
"# Test with individual examples\n",
"examples = [\n",
" \"I love this new transformer library!\",\n",
" \"This is the worst software I've ever used.\",\n",
" \"The weather is nice today.\",\n",
" \"TrustformeRS makes ML so much easier!\"\n",
"]\n",
"\n",
"print(\"🔍 Single Text Classification Examples:\")\n",
"print(\"=\" * 50)\n",
"\n",
"for text in examples:\n",
" result = classifier(text)\n",
" print(f\"Text: \\\"{text}\\\"\")\n",
" print(f\"Result: {result.label} ({result.score:.3f})\")\n",
" print()\n",
"\n",
"# Visualize results\n",
"labels = [classifier(text).label for text in examples]\n",
"scores = [classifier(text).score for text in examples]\n",
"\n",
"plt.figure(figsize=(12, 6))\n",
"colors = ['green' if label == 'POSITIVE' else 'red' if label == 'NEGATIVE' else 'gray' for label in labels]\n",
"bars = plt.bar(range(len(examples)), scores, color=colors, alpha=0.7)\n",
"plt.xlabel('Example')\n",
"plt.ylabel('Confidence Score')\n",
"plt.title('🎯 Text Classification Results')\n",
"plt.xticks(range(len(examples)), [f'Ex {i+1}' for i in range(len(examples))])\n",
"plt.ylim(0, 1)\n",
"\n",
"# Add value labels on bars\n",
"for i, (bar, score, label) in enumerate(zip(bars, scores, labels)):\n",
" plt.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.01, \n",
" f'{label}\\n{score:.3f}', ha='center', va='bottom')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 📦 Batch Processing\n",
"\n",
"For production use, batch processing is much more efficient:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Batch processing example\n",
"batch_texts = [\n",
" \"TrustformeRS is amazing for production ML!\",\n",
" \"I can't figure out how to use this library.\",\n",
" \"The documentation is comprehensive and helpful.\",\n",
" \"Performance is incredible compared to Python alternatives.\",\n",
" \"Installation was a bit tricky on my system.\"\n",
"]\n",
"\n",
"print(\"📦 Batch Processing Demo:\")\n",
"print(\"=\" * 30)\n",
"\n",
"# Time the batch processing\n",
"start_time = time.time()\n",
"batch_results = classifier.batch(batch_texts)\n",
"batch_time = time.time() - start_time\n",
"\n",
"# Time individual processing for comparison\n",
"start_time = time.time()\n",
"individual_results = [classifier(text) for text in batch_texts]\n",
"individual_time = time.time() - start_time\n",
"\n",
"print(f\"⚡ Batch processing time: {batch_time:.3f}s\")\n",
"print(f\"🐌 Individual processing time: {individual_time:.3f}s\")\n",
"print(f\"🚀 Speedup: {individual_time/batch_time:.1f}x faster\")\n",
"print()\n",
"\n",
"# Display results in a nice table\n",
"print(\"Results:\")\n",
"print(\"-\" * 80)\n",
"for i, (text, result) in enumerate(zip(batch_texts, batch_results)):\n",
" print(f\"{i+1:2d}. {text[:50]:50s} | {result.label:8s} ({result.score:.3f})\")\n",
"\n",
"# Visualize batch results\n",
"fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))\n",
"\n",
"# Sentiment distribution\n",
"sentiment_counts = {}\n",
"for result in batch_results:\n",
" sentiment_counts[result.label] = sentiment_counts.get(result.label, 0) + 1\n",
"\n",
"ax1.pie(sentiment_counts.values(), labels=sentiment_counts.keys(), autopct='%1.1f%%', startangle=90)\n",
"ax1.set_title('📊 Sentiment Distribution')\n",
"\n",
"# Confidence scores\n",
"ax2.hist([result.score for result in batch_results], bins=10, alpha=0.7, color='skyblue')\n",
"ax2.set_xlabel('Confidence Score')\n",
"ax2.set_ylabel('Frequency')\n",
"ax2.set_title('📈 Confidence Score Distribution')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ✍️ Text Generation\n",
"\n",
"Now let's explore text generation capabilities:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a text generation pipeline\n",
"generator = pipeline(\"text-generation\", \"gpt2\")\n",
"\n",
"# Creative writing prompts\n",
"prompts = [\n",
" \"The future of artificial intelligence\",\n",
" \"Once upon a time in a distant galaxy\",\n",
" \"The benefits of renewable energy include\",\n",
" \"In the world of machine learning\"\n",
"]\n",
"\n",
"print(\"✍️ Text Generation Examples:\")\n",
"print(\"=\" * 40)\n",
"\n",
"generated_texts = []\n",
"for i, prompt in enumerate(prompts):\n",
" print(f\"\\n{i+1}. Prompt: \\\"{prompt}\\\"\")\n",
" \n",
" # Generate with different max lengths\n",
" for max_len in [20, 40]:\n",
" result = generator(prompt, max_length=max_len)\n",
" print(f\" Max Length {max_len:2d}: {result.generated_text}\")\n",
" generated_texts.append(result.generated_text)\n",
"\n",
"# Analyze generation statistics\n",
"word_counts = [len(text.split()) for text in generated_texts]\n",
"char_counts = [len(text) for text in generated_texts]\n",
"\n",
"plt.figure(figsize=(12, 4))\n",
"\n",
"plt.subplot(1, 2, 1)\n",
"plt.bar(range(len(word_counts)), word_counts, color='lightcoral')\n",
"plt.xlabel('Generation')\n",
"plt.ylabel('Word Count')\n",
"plt.title('📊 Generated Text Length (Words)')\n",
"plt.xticks(range(len(word_counts)), [f'Gen {i+1}' for i in range(len(word_counts))], rotation=45)\n",
"\n",
"plt.subplot(1, 2, 2)\n",
"plt.bar(range(len(char_counts)), char_counts, color='lightblue')\n",
"plt.xlabel('Generation')\n",
"plt.ylabel('Character Count')\n",
"plt.title('📊 Generated Text Length (Characters)')\n",
"plt.xticks(range(len(char_counts)), [f'Gen {i+1}' for i in range(len(char_counts))], rotation=45)\n",
"\n",
"plt.tight_layout()\n",
"plt.show()"
]
},
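{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ❓ Question Answering\n",
"\n",
"The mock factory above only wires up classification and generation, but the `QAResult` dataclass from earlier is ready to use. Below is a minimal simulated Q&A pipeline in the same spirit as the other mocks; the word-overlap heuristic is purely illustrative, not how a real extractive QA model works:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal simulated Q&A pipeline (illustrative heuristic, mirroring the other mocks)\n",
"class MockQuestionAnsweringPipeline:\n",
"    def __init__(self, model_name: str):\n",
"        self.model_name = model_name\n",
"        print(f\"🤖 Loading {model_name}...\")\n",
"        time.sleep(0.5)\n",
"        print(\"✅ Model loaded successfully!\")\n",
"    \n",
"    def __call__(self, question: str, context: str) -> QAResult:\n",
"        # Naive heuristic: answer with the context sentence that shares\n",
"        # the most words with the question\n",
"        sentences = [s.strip() for s in context.split('.') if s.strip()]\n",
"        question_words = set(question.lower().replace('?', '').split())\n",
"        best = max(sentences, key=lambda s: len(question_words & set(s.lower().split())))\n",
"        overlap = len(question_words & set(best.lower().split()))\n",
"        confidence = min(0.5 + 0.1 * overlap, 0.99)\n",
"        return QAResult(answer=best, confidence=confidence, context=context, question=question)\n",
"\n",
"qa = MockQuestionAnsweringPipeline(\"distilbert-base-cased-distilled-squad\")\n",
"\n",
"context = (\"TrustformeRS is a machine learning library written in Rust. \"\n",
"           \"It focuses on high-performance transformer inference. \"\n",
"           \"Pipelines cover classification, generation, and question answering.\")\n",
"question = \"What language is TrustformeRS written in?\"\n",
"\n",
"result = qa(question, context)\n",
"print(f\"Q: {result.question}\")\n",
"print(f\"A: {result.answer} ({result.confidence:.2f} confidence)\")"
]
},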
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🚀 Chapter 2: Advanced Features\n",
"\n",
"Let's explore some of TrustformeRS's advanced capabilities!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🎛️ Pipeline Configuration\n",
"\n",
"TrustformeRS provides extensive configuration options for fine-tuning performance:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mock configuration classes\n",
"@dataclass\n",
"class PipelineConfig:\n",
" batch_size: int = 16\n",
" max_length: int = 512\n",
" device: str = \"auto\"\n",
" precision: str = \"fp16\"\n",
" enable_caching: bool = True\n",
" enable_jit: bool = False\n",
"\n",
"@dataclass\n",
"class GenerationConfig:\n",
" max_length: int = 50\n",
" temperature: float = 1.0\n",
" top_k: int = 50\n",
" top_p: float = 1.0\n",
" do_sample: bool = True\n",
" num_return_sequences: int = 1\n",
"\n",
"# Create configurations for different use cases\n",
"configs = {\n",
" \"fast\": PipelineConfig(batch_size=32, precision=\"fp16\", enable_jit=True),\n",
" \"accurate\": PipelineConfig(batch_size=8, precision=\"fp32\", enable_jit=False),\n",
" \"memory_efficient\": PipelineConfig(batch_size=4, max_length=256, enable_caching=False)\n",
"}\n",
"\n",
"print(\"🎛️ Pipeline Configuration Options:\")\n",
"print(\"=\" * 40)\n",
"\n",
"for name, config in configs.items():\n",
" print(f\"\\n{name.upper()} Configuration:\")\n",
" for field, value in config.__dict__.items():\n",
" print(f\" {field:15s}: {value}\")\n",
"\n",
"# Simulate performance comparison\n",
"performance_data = {\n",
" \"fast\": {\"latency\": 25, \"memory\": 1800, \"accuracy\": 0.92},\n",
" \"accurate\": {\"latency\": 45, \"memory\": 2400, \"accuracy\": 0.96},\n",
" \"memory_efficient\": {\"latency\": 35, \"memory\": 1200, \"accuracy\": 0.94}\n",
"}\n",
"\n",
"# Visualize configuration trade-offs\n",
"fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
"\n",
"metrics = ['latency', 'memory', 'accuracy']\n",
"titles = ['⚡ Latency (ms)', '💾 Memory (MB)', '🎯 Accuracy']\n",
"colors = ['red', 'blue', 'green']\n",
"\n",
"for i, (metric, title, color) in enumerate(zip(metrics, titles, colors)):\n",
" values = [performance_data[config][metric] for config in configs.keys()]\n",
" axes[i].bar(configs.keys(), values, color=color, alpha=0.7)\n",
" axes[i].set_title(title)\n",
" axes[i].set_ylabel('Value')\n",
" \n",
" # Add value labels\n",
" for j, v in enumerate(values):\n",
" axes[i].text(j, v + max(values) * 0.01, f'{v}', ha='center', va='bottom')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"print(\"\\n💡 Configuration Tips:\")\n",
"print(\"- Use 'fast' for real-time applications\")\n",
"print(\"- Use 'accurate' for research and high-quality results\")\n",
"print(\"- Use 'memory_efficient' for edge deployment or limited resources\")"
]
},
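{
"cell_type": "markdown",
"metadata": {},
"source": [
"The profiles above are defined but never actually attached to a pipeline. As an illustrative sketch (the real TrustformeRS API may accept the config at construction time instead), here is one way to select a profile and carry it on a mock pipeline:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch only: attach a named profile to a mock pipeline\n",
"def create_configured_classifier(profile: str):\n",
"    config = configs[profile]\n",
"    clf = pipeline(\"text-classification\")\n",
"    clf.config = config  # the mock object simply carries the config along\n",
"    print(f\"🔧 '{profile}' profile applied (batch_size={config.batch_size}, precision={config.precision})\")\n",
"    return clf\n",
"\n",
"fast_classifier = create_configured_classifier(\"fast\")"
]
},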
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔄 Streaming Processing\n",
"\n",
"TrustformeRS supports streaming for real-time applications:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"from IPython.display import display, clear_output\n",
"import threading\n",
"\n",
"class StreamingGenerator:\n",
" def __init__(self, base_generator):\n",
" self.base_generator = base_generator\n",
" \n",
" def stream_generate(self, prompt: str, max_tokens: int = 20):\n",
" \"\"\"Simulate streaming text generation\"\"\"\n",
" # In real TrustformeRS, this would yield tokens as they're generated\n",
" words = [\"Machine\", \"learning\", \"is\", \"revolutionizing\", \"how\", \"we\", \"interact\", \n",
" \"with\", \"technology\", \"and\", \"data\", \"in\", \"unprecedented\", \"ways.\"]\n",
" \n",
" generated_text = prompt\n",
" for i, word in enumerate(words[:max_tokens]):\n",
" if i > 0:\n",
" yield \" \" + word\n",
" else:\n",
" yield \" \" + word\n",
" time.sleep(0.2) # Simulate generation delay\n",
"\n",
"# Create streaming generator\n",
"streaming_gen = StreamingGenerator(generator)\n",
"\n",
"print(\"🌊 Streaming Text Generation Demo:\")\n",
"print(\"=\" * 35)\n",
"\n",
"prompt = \"The future of AI\"\n",
"print(f\"Prompt: '{prompt}'\")\n",
"print(\"Generated text will appear word by word...\\n\")\n",
"\n",
"# Simulate streaming output\n",
"current_text = prompt\n",
"print(f\"Output: {current_text}\", end=\"\", flush=True)\n",
"\n",
"for token in streaming_gen.stream_generate(prompt, max_tokens=10):\n",
" current_text += token\n",
" print(token, end=\"\", flush=True)\n",
"\n",
"print(\"\\n\\n✅ Streaming complete!\")\n",
"\n",
"# Streaming vs Batch comparison\n",
"print(\"\\n📊 Streaming vs Batch Processing:\")\n",
"print(\"-\" * 40)\n",
"\n",
"comparison_data = {\n",
" 'Method': ['Streaming', 'Batch'],\n",
" 'Time to First Token (ms)': [50, 200],\n",
" 'Total Time (ms)': [2000, 1800],\n",
" 'Memory Usage (MB)': [800, 1200],\n",
" 'User Experience': ['Real-time', 'Wait then see all']\n",
"}\n",
"\n",
"for key, values in comparison_data.items():\n",
" print(f\"{key:25s}: {str(values[0]):12s} vs {str(values[1]):12s}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## ⚡ Chapter 3: Performance Optimization\n",
"\n",
"TrustformeRS provides several optimization techniques to maximize performance:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 🔥 JIT Compilation\n",
"\n",
"Just-in-time compilation can significantly speed up inference:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mock JIT compilation system\n",
"class JITOptimizer:\n",
" def __init__(self):\n",
" self.compilation_cache = {}\n",
" self.enabled = False\n",
" \n",
" def enable(self):\n",
" self.enabled = True\n",
" print(\"🔥 JIT compilation enabled\")\n",
" \n",
" def compile_pipeline(self, pipeline_type: str):\n",
" if pipeline_type not in self.compilation_cache:\n",
" print(f\"⚙️ Compiling {pipeline_type} pipeline...\")\n",
" time.sleep(1) # Simulate compilation time\n",
" self.compilation_cache[pipeline_type] = True\n",
" print(f\"✅ {pipeline_type} pipeline compiled\")\n",
" else:\n",
" print(f\"💾 Using cached compilation for {pipeline_type}\")\n",
" \n",
" def benchmark_inference(self, pipeline_type: str, num_runs: int = 100):\n",
" \"\"\"Benchmark inference with and without JIT\"\"\"\n",
" # Simulate performance improvements\n",
" base_latency = random.uniform(80, 120) # ms\n",
" \n",
" if self.enabled and pipeline_type in self.compilation_cache:\n",
" jit_latency = base_latency * 0.6 # 40% improvement\n",
" return {\n",
" 'without_jit': base_latency,\n",
" 'with_jit': jit_latency,\n",
" 'speedup': base_latency / jit_latency\n",
" }\n",
" else:\n",
" return {\n",
" 'without_jit': base_latency,\n",
" 'with_jit': None,\n",
" 'speedup': 1.0\n",
" }\n",
"\n",
"# Performance optimization demo\n",
"print(\"🚀 JIT Compilation Performance Demo\")\n",
"print(\"=\" * 40)\n",
"\n",
"jit_optimizer = JITOptimizer()\n",
"\n",
"# Benchmark without JIT\n",
"print(\"\\n1. Baseline Performance (No JIT):\")\n",
"baseline_results = {}\n",
"pipeline_types = ['text-classification', 'text-generation', 'question-answering']\n",
"\n",
"for pipeline_type in pipeline_types:\n",
" result = jit_optimizer.benchmark_inference(pipeline_type)\n",
" baseline_results[pipeline_type] = result['without_jit']\n",
" print(f\" {pipeline_type:20s}: {result['without_jit']:.1f} ms\")\n",
"\n",
"# Enable JIT and compile\n",
"print(\"\\n2. Enabling JIT Compilation:\")\n",
"jit_optimizer.enable()\n",
"\n",
"for pipeline_type in pipeline_types:\n",
" jit_optimizer.compile_pipeline(pipeline_type)\n",
"\n",
"# Benchmark with JIT\n",
"print(\"\\n3. Performance with JIT:\")\n",
"jit_results = {}\n",
"speedups = {}\n",
"\n",
"for pipeline_type in pipeline_types:\n",
" result = jit_optimizer.benchmark_inference(pipeline_type)\n",
" jit_results[pipeline_type] = result['with_jit']\n",
" speedups[pipeline_type] = result['speedup']\n",
" print(f\" {pipeline_type:20s}: {result['with_jit']:.1f} ms ({result['speedup']:.1f}x speedup)\")\n",
"\n",
"# Visualize performance improvements\n",
"fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
"\n",
"# Latency comparison\n",
"x = np.arange(len(pipeline_types))\n",
"width = 0.35\n",
"\n",
"baseline_values = [baseline_results[pt] for pt in pipeline_types]\n",
"jit_values = [jit_results[pt] for pt in pipeline_types]\n",
"\n",
"ax1.bar(x - width/2, baseline_values, width, label='Without JIT', color='lightcoral')\n",
"ax1.bar(x + width/2, jit_values, width, label='With JIT', color='lightgreen')\n",
"\n",
"ax1.set_xlabel('Pipeline Type')\n",
"ax1.set_ylabel('Latency (ms)')\n",
"ax1.set_title('⚡ JIT Performance Comparison')\n",
"ax1.set_xticks(x)\n",
"ax1.set_xticklabels([pt.replace('-', '\\n') for pt in pipeline_types])\n",
"ax1.legend()\n",
"\n",
"# Speedup visualization\n",
"speedup_values = [speedups[pt] for pt in pipeline_types]\n",
"bars = ax2.bar(pipeline_types, speedup_values, color='gold')\n",
"ax2.set_xlabel('Pipeline Type')\n",
"ax2.set_ylabel('Speedup Factor')\n",
"ax2.set_title('🚀 JIT Speedup Factors')\n",
"ax2.set_xticklabels([pt.replace('-', '\\n') for pt in pipeline_types])\n",
"\n",
"# Add speedup labels\n",
"for bar, speedup in zip(bars, speedup_values):\n",
" ax2.text(bar.get_x() + bar.get_width()/2, bar.get_height() + 0.05,\n",
" f'{speedup:.1f}x', ha='center', va='bottom', fontweight='bold')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"print(f\"\\n💡 Average speedup: {np.mean(speedup_values):.1f}x\")\n",
"print(\"🎯 JIT compilation provides significant performance improvements!\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 💾 Caching System\n",
"\n",
"TrustformeRS includes a sophisticated caching system to avoid redundant computations:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Mock caching system\n",
"class TrustformeRSCache:\n",
" def __init__(self, max_size: int = 1000):\n",
" self.cache = {}\n",
" self.max_size = max_size\n",
" self.hits = 0\n",
" self.misses = 0\n",
" \n",
" def get(self, key: str):\n",
" if key in self.cache:\n",
" self.hits += 1\n",
" return self.cache[key]\n",
" else:\n",
" self.misses += 1\n",
" return None\n",
" \n",
" def put(self, key: str, value):\n",
" if len(self.cache) >= self.max_size:\n",
" # Simple LRU: remove oldest item\n",
" oldest_key = next(iter(self.cache))\n",
" del self.cache[oldest_key]\n",
" self.cache[key] = value\n",
" \n",
" def get_stats(self):\n",
" total = self.hits + self.misses\n",
" hit_rate = self.hits / total if total > 0 else 0\n",
" return {\n",
" 'hits': self.hits,\n",
" 'misses': self.misses,\n",
" 'hit_rate': hit_rate,\n",
" 'cache_size': len(self.cache)\n",
" }\n",
"\n",
"class CachedPipeline:\n",
" def __init__(self, base_pipeline, cache_size: int = 100):\n",
" self.base_pipeline = base_pipeline\n",
" self.cache = TrustformeRSCache(cache_size)\n",
" \n",
" def __call__(self, text: str):\n",
" # Use text hash as cache key\n",
" cache_key = f\"classify_{hash(text)}\"\n",
" \n",
" # Check cache first\n",
" cached_result = self.cache.get(cache_key)\n",
" if cached_result is not None:\n",
" return cached_result\n",
" \n",
" # Compute result and cache it\n",
" result = self.base_pipeline(text)\n",
" self.cache.put(cache_key, result)\n",
" return result\n",
"\n",
"# Caching performance demo\n",
"print(\"💾 Caching System Performance Demo\")\n",
"print(\"=\" * 40)\n",
"\n",
"# Create cached pipeline\n",
"base_classifier = pipeline(\"text-classification\")\n",
"cached_classifier = CachedPipeline(base_classifier, cache_size=50)\n",
"\n",
"# Test data with some repeated texts\n",
"test_texts = [\n",
" \"I love TrustformeRS!\",\n",
" \"This library is amazing\",\n",
" \"Performance is incredible\",\n",
" \"I love TrustformeRS!\", # Repeat\n",
" \"Documentation is great\",\n",
" \"This library is amazing\", # Repeat\n",
" \"Easy to use and fast\",\n",
" \"Performance is incredible\", # Repeat\n",
" \"Rust is the future\",\n",
" \"I love TrustformeRS!\", # Repeat\n",
"]\n",
"\n",
"print(\"🔄 Processing texts with caching...\")\n",
"\n",
"# Process with timing\n",
"processing_times = []\n",
"for i, text in enumerate(test_texts):\n",
" start_time = time.time()\n",
" result = cached_classifier(text)\n",
" processing_time = (time.time() - start_time) * 1000 # Convert to ms\n",
" processing_times.append(processing_time)\n",
" \n",
" cache_status = \"💾 CACHE HIT\" if processing_time < 10 else \"🔄 COMPUTED\"\n",
" print(f\"{i+1:2d}. {text[:30]:30s} | {result.label:8s} | {processing_time:6.1f}ms | {cache_status}\")\n",
"\n",
"# Display cache statistics\n",
"stats = cached_classifier.cache.get_stats()\n",
"print(f\"\\n📊 Cache Statistics:\")\n",
"print(f\" Cache hits: {stats['hits']}\")\n",
"print(f\" Cache misses: {stats['misses']}\")\n",
"print(f\" Hit rate: {stats['hit_rate']:.1%}\")\n",
"print(f\" Cache size: {stats['cache_size']} items\")\n",
"\n",
"# Visualize cache performance\n",
"fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))\n",
"\n",
"# Processing time over requests\n",
"ax1.plot(range(1, len(processing_times) + 1), processing_times, 'bo-', linewidth=2, markersize=8)\n",
"ax1.set_xlabel('Request Number')\n",
"ax1.set_ylabel('Processing Time (ms)')\n",
"ax1.set_title('⚡ Processing Time per Request')\n",
"ax1.grid(True, alpha=0.3)\n",
"\n",
"# Cache hit/miss visualization\n",
"cache_data = ['Hit' if time < 10 else 'Miss' for time in processing_times]\n",
"hit_count = cache_data.count('Hit')\n",
"miss_count = cache_data.count('Miss')\n",
"\n",
"ax2.pie([hit_count, miss_count], labels=['Cache Hits', 'Cache Misses'], \n",
" autopct='%1.1f%%', colors=['lightgreen', 'lightcoral'], startangle=90)\n",
"ax2.set_title('💾 Cache Hit/Miss Ratio')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"# Performance improvement calculation\n",
"avg_cache_hit_time = np.mean([t for t in processing_times if t < 10])\n",
"avg_compute_time = np.mean([t for t in processing_times if t >= 10])\n",
"speedup = avg_compute_time / avg_cache_hit_time if avg_cache_hit_time > 0 else 1\n",
"\n",
"print(f\"\\n🚀 Performance Improvement:\")\n",
"print(f\" Average cache hit time: {avg_cache_hit_time:.1f}ms\")\n",
"print(f\" Average compute time: {avg_compute_time:.1f}ms\")\n",
"print(f\" Cache speedup: {speedup:.1f}x faster\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🔧 Chapter 4: Model Comparison and Selection\n",
"\n",
"TrustformeRS makes it easy to compare different models and select the best one for your use case:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Model comparison framework\n",
"class ModelComparator:\n",
" def __init__(self):\n",
" self.models = {}\n",
" self.benchmark_results = {}\n",
" \n",
" def add_model(self, name: str, pipeline):\n",
" self.models[name] = pipeline\n",
" print(f\"✅ Added model: {name}\")\n",
" \n",
" def benchmark_models(self, test_texts: List[str]):\n",
" print(\"🏁 Benchmarking models...\")\n",
" \n",
" for model_name, pipeline in self.models.items():\n",
" print(f\" Testing {model_name}...\")\n",
" \n",
" start_time = time.time()\n",
" results = [pipeline(text) for text in test_texts]\n",
" total_time = time.time() - start_time\n",
" \n",
" # Simulate model-specific performance characteristics\n",
" model_metrics = {\n",
" 'DistilBERT': {'latency': 45, 'memory': 1200, 'accuracy': 0.92, 'model_size': 66},\n",
" 'RoBERTa': {'latency': 78, 'memory': 2100, 'accuracy': 0.96, 'model_size': 125},\n",
" 'ALBERT': {'latency': 52, 'memory': 1400, 'accuracy': 0.94, 'model_size': 12}\n",
" }\n",
" \n",
" base_metrics = model_metrics.get(model_name, model_metrics['DistilBERT'])\n",
" \n",
" self.benchmark_results[model_name] = {\n",
" 'avg_latency': base_metrics['latency'] + random.uniform(-5, 5),\n",
" 'memory_usage': base_metrics['memory'] + random.uniform(-50, 50),\n",
" 'accuracy': base_metrics['accuracy'] + random.uniform(-0.02, 0.02),\n",
" 'model_size_mb': base_metrics['model_size'],\n",
" 'throughput': len(test_texts) / (base_metrics['latency'] / 1000),\n",
" 'results': results\n",
" }\n",
" \n",
" def get_comparison_table(self):\n",
" return self.benchmark_results\n",
" \n",
" def recommend_model(self, priority: str = \"balanced\"):\n",
" if not self.benchmark_results:\n",
" return None\n",
" \n",
" scores = {}\n",
" \n",
" for model_name, metrics in self.benchmark_results.items():\n",
" if priority == \"speed\":\n",
" # Higher throughput and lower latency is better\n",
" score = (metrics['throughput'] / 10) + (100 / metrics['avg_latency'])\n",
" elif priority == \"accuracy\":\n",
" # Higher accuracy is better\n",
" score = metrics['accuracy'] * 100\n",
" elif priority == \"memory\":\n",
" # Lower memory usage is better\n",
" score = 10000 / metrics['memory_usage']\n",
" else: # balanced\n",
" # Weighted combination of all factors\n",
" score = (metrics['accuracy'] * 40 + \n",
" (metrics['throughput'] / 10) * 20 + \n",
" (100 / metrics['avg_latency']) * 20 + \n",
" (1000 / metrics['memory_usage']) * 20)\n",
" \n",
" scores[model_name] = score\n",
" \n",
" best_model = max(scores, key=scores.get)\n",
" return best_model, scores\n",
"\n",
"# Model comparison demo\n",
"print(\"🔬 Model Comparison Demo\")\n",
"print(\"=\" * 30)\n",
"\n",
"# Create different \"models\" (simulated)\n",
"comparator = ModelComparator()\n",
"\n",
"# Add models to compare\n",
"models_to_compare = {\n",
" \"DistilBERT\": pipeline(\"text-classification\", \"distilbert-base-uncased\"),\n",
" \"RoBERTa\": pipeline(\"text-classification\", \"roberta-base\"),\n",
" \"ALBERT\": pipeline(\"text-classification\", \"albert-base\")\n",
"}\n",
"\n",
"for name, model in models_to_compare.items():\n",
" comparator.add_model(name, model)\n",
"\n",
"# Test texts for comparison\n",
"comparison_texts = [\n",
" \"TrustformeRS is an excellent ML library\",\n",
" \"I'm having trouble with this software\",\n",
" \"The performance improvements are remarkable\",\n",
" \"Documentation could be better\",\n",
" \"Rust makes everything so much faster\"\n",
"]\n",
"\n",
"# Run benchmarks\n",
"comparator.benchmark_models(comparison_texts)\n",
"\n",
"# Display results\n",
"results = comparator.get_comparison_table()\n",
"\n",
"print(\"\\n📊 Benchmark Results:\")\n",
"print(\"-\" * 80)\n",
"print(f\"{'Model':<12} {'Latency(ms)':<12} {'Memory(MB)':<12} {'Accuracy':<10} {'Size(MB)':<10} {'Throughput':<10}\")\n",
"print(\"-\" * 80)\n",
"\n",
"for model_name, metrics in results.items():\n",
" print(f\"{model_name:<12} {metrics['avg_latency']:<12.1f} {metrics['memory_usage']:<12.0f} \"\n",
" f\"{metrics['accuracy']:<10.3f} {metrics['model_size_mb']:<10.0f} {metrics['throughput']:<10.1f}\")\n",
"\n",
"# Visualize comparison\n",
"fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))\n",
"\n",
"models = list(results.keys())\n",
"colors = ['lightblue', 'lightgreen', 'lightcoral']\n",
"\n",
"# Latency comparison\n",
"latencies = [results[model]['avg_latency'] for model in models]\n",
"ax1.bar(models, latencies, color=colors)\n",
"ax1.set_title('⚡ Average Latency')\n",
"ax1.set_ylabel('Latency (ms)')\n",
"for i, v in enumerate(latencies):\n",
" ax1.text(i, v + 1, f'{v:.1f}', ha='center', va='bottom')\n",
"\n",
"# Memory usage comparison\n",
"memory_usage = [results[model]['memory_usage'] for model in models]\n",
"ax2.bar(models, memory_usage, color=colors)\n",
"ax2.set_title('💾 Memory Usage')\n",
"ax2.set_ylabel('Memory (MB)')\n",
"for i, v in enumerate(memory_usage):\n",
" ax2.text(i, v + 20, f'{v:.0f}', ha='center', va='bottom')\n",
"\n",
"# Accuracy comparison\n",
"accuracies = [results[model]['accuracy'] for model in models]\n",
"ax3.bar(models, accuracies, color=colors)\n",
"ax3.set_title('🎯 Accuracy')\n",
"ax3.set_ylabel('Accuracy')\n",
"ax3.set_ylim(0.9, 1.0)\n",
"for i, v in enumerate(accuracies):\n",
" ax3.text(i, v + 0.002, f'{v:.3f}', ha='center', va='bottom')\n",
"\n",
"# Model size comparison\n",
"model_sizes = [results[model]['model_size_mb'] for model in models]\n",
"ax4.bar(models, model_sizes, color=colors)\n",
"ax4.set_title('📦 Model Size')\n",
"ax4.set_ylabel('Size (MB)')\n",
"for i, v in enumerate(model_sizes):\n",
" ax4.text(i, v + 2, f'{v:.0f}', ha='center', va='bottom')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"# Get recommendations for different priorities\n",
"print(\"\\n🎯 Model Recommendations:\")\n",
"print(\"-\" * 30)\n",
"\n",
"priorities = [\"speed\", \"accuracy\", \"memory\", \"balanced\"]\n",
"for priority in priorities:\n",
" best_model, scores = comparator.recommend_model(priority)\n",
" print(f\" {priority.upper():8s}: {best_model} (score: {scores[best_model]:.2f})\")\n",
"\n",
"print(\"\\n💡 Choose your model based on your requirements:\")\n",
"print(\" - DistilBERT: Good balance of speed and accuracy\")\n",
"print(\" - RoBERTa: Highest accuracy, but slower and more memory-intensive\")\n",
"print(\" - ALBERT: Smallest model size, reasonable performance\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🌟 Chapter 5: Real-World Applications\n",
"\n",
"Let's explore how to use TrustformeRS in real-world scenarios:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 📱 Social Media Sentiment Analysis\n",
"\n",
"A common use case is analyzing social media posts for sentiment:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Social media sentiment analysis demo\n",
"class SocialMediaAnalyzer:\n",
" def __init__(self):\n",
" self.classifier = pipeline(\"text-classification\", \"cardiffnlp/twitter-roberta-base-sentiment\")\n",
" self.analysis_results = []\n",
" \n",
" def analyze_posts(self, posts: List[Dict[str, str]]):\n",
" \"\"\"Analyze a batch of social media posts\"\"\"\n",
" print(f\"📱 Analyzing {len(posts)} social media posts...\")\n",
" \n",
" for post in posts:\n",
" result = self.classifier(post['text'])\n",
" \n",
" analysis = {\n",
" 'username': post['username'],\n",
" 'text': post['text'],\n",
" 'sentiment': result.label,\n",
" 'confidence': result.score,\n",
" 'timestamp': post.get('timestamp', 'unknown')\n",
" }\n",
" self.analysis_results.append(analysis)\n",
" \n",
" return self.analysis_results\n",
" \n",
" def get_summary_stats(self):\n",
" \"\"\"Get summary statistics of the analysis\"\"\"\n",
" if not self.analysis_results:\n",
" return {}\n",
" \n",
" sentiments = [r['sentiment'] for r in self.analysis_results]\n",
" sentiment_counts = {}\n",
" for sentiment in sentiments:\n",
" sentiment_counts[sentiment] = sentiment_counts.get(sentiment, 0) + 1\n",
" \n",
" avg_confidence = np.mean([r['confidence'] for r in self.analysis_results])\n",
" \n",
" return {\n",
" 'total_posts': len(self.analysis_results),\n",
" 'sentiment_distribution': sentiment_counts,\n",
" 'average_confidence': avg_confidence,\n",
" 'most_positive': max(self.analysis_results, key=lambda x: x['confidence'] if x['sentiment'] == 'POSITIVE' else 0),\n",
" 'most_negative': max(self.analysis_results, key=lambda x: x['confidence'] if x['sentiment'] == 'NEGATIVE' else 0)\n",
" }\n",
"\n",
"# Sample social media posts (simulated)\n",
"sample_posts = [\n",
" {\"username\": \"@tech_enthusiast\", \"text\": \"Just tried TrustformeRS and it's blazing fast! 🚀 #rust #ml\", \"timestamp\": \"2024-01-15 10:30\"},\n",
" {\"username\": \"@data_scientist\", \"text\": \"Struggling with memory issues in my ML pipeline 😞\", \"timestamp\": \"2024-01-15 11:15\"},\n",
" {\"username\": \"@ai_researcher\", \"text\": \"The performance improvements in TrustformeRS are incredible! Best decision for our production system.\", \"timestamp\": \"2024-01-15 12:00\"},\n",
" {\"username\": \"@startup_cto\", \"text\": \"Migration to Rust-based ML was painful but worth it\", \"timestamp\": \"2024-01-15 13:45\"},\n",
" {\"username\": \"@ml_engineer\", \"text\": \"Love how TrustformeRS handles large-scale inference ❤️\", \"timestamp\": \"2024-01-15 14:20\"},\n",
" {\"username\": \"@dev_community\", \"text\": \"Documentation is comprehensive and examples are clear 👍\", \"timestamp\": \"2024-01-15 15:30\"},\n",
" {\"username\": \"@python_dev\", \"text\": \"Miss the simplicity of Python libraries sometimes 😔\", \"timestamp\": \"2024-01-15 16:10\"},\n",
" {\"username\": \"@rust_lover\", \"text\": \"This is why Rust is the future of systems programming! 🦀\", \"timestamp\": \"2024-01-15 17:00\"}\n",
"]\n",
"\n",
"print(\"📱 Social Media Sentiment Analysis Demo\")\n",
"print(\"=\" * 45)\n",
"\n",
"# Create analyzer and process posts\n",
"analyzer = SocialMediaAnalyzer()\n",
"results = analyzer.analyze_posts(sample_posts)\n",
"\n",
"# Display individual results\n",
"print(\"\\n📊 Individual Post Analysis:\")\n",
"print(\"-\" * 80)\n",
"for result in results:\n",
" sentiment_emoji = \"😊\" if result['sentiment'] == 'POSITIVE' else \"😞\" if result['sentiment'] == 'NEGATIVE' else \"😐\"\n",
" print(f\"{result['username']:15s} | {sentiment_emoji} {result['sentiment']:8s} ({result['confidence']:.3f}) | {result['text'][:40]}...\")\n",
"\n",
"# Get and display summary statistics\n",
"summary = analyzer.get_summary_stats()\n",
"\n",
"print(f\"\\n📈 Summary Statistics:\")\n",
"print(f\" Total posts analyzed: {summary['total_posts']}\")\n",
"print(f\" Average confidence: {summary['average_confidence']:.3f}\")\n",
"print(\"\\n Sentiment Distribution:\")\n",
"for sentiment, count in summary['sentiment_distribution'].items():\n",
" percentage = (count / summary['total_posts']) * 100\n",
" print(f\" {sentiment:8s}: {count:2d} posts ({percentage:5.1f}%)\")\n",
"\n",
"# Visualize results\n",
"fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 12))\n",
"\n",
"# Sentiment distribution pie chart\n",
"sentiment_counts = summary['sentiment_distribution']\n",
"colors_pie = ['lightgreen', 'lightcoral', 'lightgray']\n",
"ax1.pie(sentiment_counts.values(), labels=sentiment_counts.keys(), autopct='%1.1f%%', \n",
" colors=colors_pie, startangle=90)\n",
"ax1.set_title('😊 Sentiment Distribution')\n",
"\n",
"# Confidence scores distribution\n",
"confidences = [r['confidence'] for r in results]\n",
"ax2.hist(confidences, bins=10, alpha=0.7, color='skyblue', edgecolor='black')\n",
"ax2.set_xlabel('Confidence Score')\n",
"ax2.set_ylabel('Frequency')\n",
"ax2.set_title('📊 Confidence Score Distribution')\n",
"\n",
"# Timeline of sentiments (by post order)\n",
"timeline_sentiments = [r['sentiment'] for r in results]\n",
"timeline_colors = ['green' if s == 'POSITIVE' else 'red' if s == 'NEGATIVE' else 'gray' for s in timeline_sentiments]\n",
"ax3.scatter(range(len(results)), [1 if s == 'POSITIVE' else -1 if s == 'NEGATIVE' else 0 for s in timeline_sentiments], \n",
" c=timeline_colors, s=100, alpha=0.7)\n",
"ax3.set_xlabel('Post Order')\n",
"ax3.set_ylabel('Sentiment')\n",
"ax3.set_title('📈 Sentiment Timeline')\n",
"ax3.set_yticks([-1, 0, 1])\n",
"ax3.set_yticklabels(['Negative', 'Neutral', 'Positive'])\n",
"ax3.grid(True, alpha=0.3)\n",
"\n",
"# Word cloud simulation (showing most common words)\n",
"all_text = ' '.join([r['text'] for r in results])\n",
"words = all_text.lower().split()\n",
"# Filter out common words and count frequencies\n",
"stop_words = {'the', 'and', 'is', 'it', 'to', 'of', 'in', 'for', 'with', 'on', 'a', 'an'}\n",
"word_freq = {}\n",
"for word in words:\n",
" word = word.strip('.,!?@#')\n",
" if word not in stop_words and len(word) > 2:\n",
" word_freq[word] = word_freq.get(word, 0) + 1\n",
"\n",
"# Display top words\n",
"top_words = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)[:10]\n",
"words_list, counts_list = zip(*top_words) if top_words else ([], [])\n",
"\n",
"ax4.barh(range(len(words_list)), counts_list, color='lightblue')\n",
"ax4.set_yticks(range(len(words_list)))\n",
"ax4.set_yticklabels(words_list)\n",
"ax4.set_xlabel('Frequency')\n",
"ax4.set_title('🔤 Most Common Words')\n",
"\n",
"plt.tight_layout()\n",
"plt.show()\n",
"\n",
"print(\"\\n💡 Insights:\")\n",
"positive_pct = (sentiment_counts.get('POSITIVE', 0) / summary['total_posts']) * 100\n",
"if positive_pct > 60:\n",
" print(f\" 📈 Overall sentiment is very positive ({positive_pct:.1f}% positive posts)\")\n",
"elif positive_pct > 40:\n",
" print(f\" 📊 Sentiment is balanced ({positive_pct:.1f}% positive posts)\")\n",
"else:\n",
" print(f\" 📉 Overall sentiment tends negative ({positive_pct:.1f}% positive posts)\")\n",
"\n",
"print(f\" 🎯 Model confidence is {'high' if summary['average_confidence'] > 0.8 else 'moderate' if summary['average_confidence'] > 0.6 else 'low'} (avg: {summary['average_confidence']:.3f})\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🎓 Chapter 6: Best Practices and Tips\n",
"\n",
"Let's cover some best practices for using TrustformeRS effectively:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Best practices demonstration\n",
"class TrustformeRSBestPractices:\n",
" \n",
" @staticmethod\n",
" def demonstrate_error_handling():\n",
" \"\"\"Show proper error handling patterns\"\"\"\n",
" print(\"🛡️ Error Handling Best Practices:\")\n",
" print(\"-\" * 40)\n",
" \n",
" # Example: Robust pipeline creation\n",
" try:\n",
" # This would normally be: pipeline(\"text-classification\", \"invalid-model\")\n",
" print(\"✅ Always wrap pipeline creation in try-catch blocks\")\n",
" print(\"✅ Provide fallback models if primary model fails\")\n",
" print(\"✅ Log errors with context for debugging\")\n",
" except Exception as e:\n",
" print(f\"❌ Model loading failed: {e}\")\n",
" print(\"🔄 Falling back to default model...\")\n",
" \n",
" @staticmethod\n",
" def demonstrate_memory_management():\n",
" \"\"\"Show memory management best practices\"\"\"\n",
" print(\"\\n💾 Memory Management Best Practices:\")\n",
" print(\"-\" * 45)\n",
" \n",
" tips = [\n",
" \"🔧 Use appropriate batch sizes (start with 16, adjust based on GPU memory)\",\n",
" \"📦 Enable model caching for repeated inferences\",\n",
" \"🗑️ Explicitly clear cache when switching between many models\",\n",
" \"⚡ Use FP16 precision for inference when accuracy allows\",\n",
" \"📊 Monitor memory usage in production environments\",\n",
" \"🔄 Consider model quantization for edge deployment\"\n",
" ]\n",
" \n",
" for tip in tips:\n",
" print(f\" {tip}\")\n",
" \n",
" @staticmethod\n",
" def demonstrate_performance_optimization():\n",
" \"\"\"Show performance optimization strategies\"\"\"\n",
" print(\"\\n⚡ Performance Optimization Best Practices:\")\n",
" print(\"-\" * 50)\n",
" \n",
" optimizations = [\n",
" \"🔥 Enable JIT compilation for production workloads\",\n",
" \"📦 Use batch processing instead of individual requests\",\n",
" \"💾 Implement result caching for repeated queries\",\n",
" \"🎯 Choose the right model size for your latency requirements\",\n",
" \"🔧 Profile your application to identify bottlenecks\",\n",
" \"🌊 Use streaming for real-time applications\",\n",
" \"⚖️ Load balance across multiple model instances\"\n",
" ]\n",
" \n",
" for opt in optimizations:\n",
" print(f\" {opt}\")\n",
" \n",
" @staticmethod\n",
" def demonstrate_production_checklist():\n",
" \"\"\"Production deployment checklist\"\"\"\n",
" print(\"\\n🚀 Production Deployment Checklist:\")\n",
" print(\"-\" * 45)\n",
" \n",
" checklist = [\n",
" (\"Model Validation\", [\n",
" \"✅ Test model accuracy on validation dataset\",\n",
" \"✅ Verify model outputs for edge cases\",\n",
" \"✅ Check model compatibility with input formats\"\n",
" ]),\n",
" (\"Performance Testing\", [\n",
" \"✅ Load test with expected traffic patterns\",\n",
" \"✅ Measure latency percentiles (P50, P95, P99)\",\n",
" \"✅ Monitor memory usage under load\"\n",
" ]),\n",
" (\"Monitoring & Observability\", [\n",
" \"✅ Set up metrics collection (latency, throughput, errors)\",\n",
" \"✅ Configure alerting for anomalies\",\n",
" \"✅ Implement health checks and readiness probes\"\n",
" ]),\n",
" (\"Security & Compliance\", [\n",
" \"✅ Validate input sanitization\",\n",
" \"✅ Implement rate limiting\",\n",
" \"✅ Ensure data privacy compliance\"\n",
" ])\n",
" ]\n",
" \n",
" for category, items in checklist:\n",
" print(f\"\\n 📋 {category}:\")\n",
" for item in items:\n",
" print(f\" {item}\")\n",
" \n",
" @staticmethod\n",
" def demonstrate_common_pitfalls():\n",
" \"\"\"Common pitfalls to avoid\"\"\"\n",
" print(\"\\n⚠️ Common Pitfalls to Avoid:\")\n",
" print(\"-\" * 35)\n",
" \n",
" pitfalls = [\n",
" \"❌ Loading models in request handlers (load once, reuse many times)\",\n",
" \"❌ Not handling model loading failures gracefully\",\n",
" \"❌ Using synchronous calls in async applications\",\n",
" \"❌ Ignoring batch size optimization\",\n",
" \"❌ Not monitoring resource usage in production\",\n",
" \"❌ Hardcoding model paths (use configuration files)\",\n",
" \"❌ Not validating input data before inference\",\n",
" \"❌ Forgetting to set appropriate timeouts\"\n",
" ]\n",
" \n",
" for pitfall in pitfalls:\n",
" print(f\" {pitfall}\")\n",
"\n",
"# Run best practices demo\n",
"print(\"🎓 TrustformeRS Best Practices Guide\")\n",
"print(\"=\" * 40)\n",
"\n",
"best_practices = TrustformeRSBestPractices()\n",
"best_practices.demonstrate_error_handling()\n",
"best_practices.demonstrate_memory_management()\n",
"best_practices.demonstrate_performance_optimization()\n",
"best_practices.demonstrate_production_checklist()\n",
"best_practices.demonstrate_common_pitfalls()\n",
"\n",
"print(\"\\n🌟 Key Takeaways:\")\n",
"print(\"-\" * 20)\n",
"print(\"1. 🛡️ Always implement proper error handling and fallbacks\")\n",
"print(\"2. ⚡ Optimize for your specific use case (speed vs accuracy vs memory)\")\n",
"print(\"3. 📊 Monitor and measure everything in production\")\n",
"print(\"4. 🔧 Start simple, then optimize based on real performance data\")\n",
"print(\"5. 🚀 TrustformeRS provides the tools - use them wisely!\")"
]
},
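{
"cell_type": "markdown",
"metadata": {},
"source": [
"Two of the tips above deserve a concrete look. The sketch below is built on the mock factory from Chapter 1 (not the real TrustformeRS API) and shows the \"load once, with a fallback\" pattern plus one way to measure latency percentiles:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketches built on the mock pipelines above\n",
"\n",
"# 1. Load the model once (with a fallback), instead of once per request\n",
"def load_classifier_with_fallback(primary: str, fallback: str):\n",
"    try:\n",
"        return pipeline(\"text-classification\", primary)\n",
"    except Exception as exc:\n",
"        print(f\"❌ Failed to load {primary}: {exc}\")\n",
"        print(f\"🔄 Falling back to {fallback}...\")\n",
"        return pipeline(\"text-classification\", fallback)\n",
"\n",
"robust_classifier = load_classifier_with_fallback(\n",
"    \"distilbert-base-uncased-finetuned-sst-2-english\",\n",
"    \"distilbert-base-uncased\"\n",
")\n",
"\n",
"# 2. Measure latency percentiles (P50/P95/P99) over repeated requests\n",
"latencies = []\n",
"for _ in range(50):\n",
"    start = time.time()\n",
"    robust_classifier(\"TrustformeRS makes benchmarking easy\")\n",
"    latencies.append((time.time() - start) * 1000)  # milliseconds\n",
"\n",
"for p in (50, 95, 99):\n",
"    print(f\"P{p} latency: {np.percentile(latencies, p):.1f} ms\")"
]
},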
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 🎉 Conclusion\n",
"\n",
"Congratulations! You've completed the comprehensive TrustformeRS tutorial. You've learned:\n",
"\n",
"### ✅ What You've Accomplished\n",
"\n",
"1. **Basic Pipeline Usage** - Creating and using text classification, generation, and Q&A pipelines\n",
"2. **Advanced Features** - Configuration, streaming, and batch processing\n",
"3. **Performance Optimization** - JIT compilation, caching, and profiling\n",
"4. **Model Comparison** - Selecting the right model for your needs\n",
"5. **Real-World Applications** - Social media sentiment analysis\n",
"6. **Best Practices** - Production-ready deployment strategies\n",
"\n",
"### 🚀 Next Steps\n",
"\n",
"1. **Try the Examples** - Run the example scripts in the TrustformeRS repository\n",
"2. **Build Your Own Application** - Start with a simple use case and expand\n",
"3. **Join the Community** - Contribute to TrustformeRS development\n",
"4. **Read the Documentation** - Dive deeper into specific features\n",
"5. **Share Your Experience** - Help others learn TrustformeRS\n",
"\n",
"### 📚 Additional Resources\n",
"\n",
"- **GitHub Repository**: [https://github.com/trustformers/trustformers](https://github.com/trustformers/trustformers)\n",
"- **Documentation**: [https://trustformers.dev/docs](https://trustformers.dev/docs)\n",
"- **Examples**: [https://github.com/trustformers/examples](https://github.com/trustformers/examples)\n",
"- **Community**: [https://discord.gg/trustformers](https://discord.gg/trustformers)\n",
"\n",
"### 🤝 Get Involved\n",
"\n",
"TrustformeRS is an open-source project that welcomes contributions:\n",
"\n",
"- **Report Issues**: Found a bug? Let us know!\n",
"- **Suggest Features**: Have ideas for improvements?\n",
"- **Contribute Code**: Help make TrustformeRS even better\n",
"- **Write Documentation**: Share your knowledge with others\n",
"- **Create Tutorials**: Help newcomers get started\n",
"\n",
"Thank you for exploring TrustformeRS! 🎉🚀"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Final celebration! 🎉\n",
"print(\"🎉\" * 50)\n",
"print(\"🚀 Congratulations on completing the TrustformeRS tutorial! 🚀\")\n",
"print(\"🎉\" * 50)\n",
"print()\n",
"print(\"You're now ready to build amazing ML applications with TrustformeRS!\")\n",
"print(\"Go forth and create something awesome! 🌟\")\n",
"\n",
"# Generate a completion certificate\n",
"import datetime\n",
"\n",
"completion_date = datetime.datetime.now().strftime(\"%Y-%m-%d\")\n",
"certificate = f\"\"\"\n",
"┌─────────────────────────────────────────────────────────────┐\n",
"│ 🏆 CERTIFICATE OF COMPLETION 🏆 │\n",
"├─────────────────────────────────────────────────────────────┤\n",
"│ │\n",
"│ TrustformeRS Interactive Tutorial │\n",
"│ │\n",
"│ This certifies that the holder has successfully completed │\n",
"│ the comprehensive TrustformeRS tutorial and is now │\n",
"│ qualified to build high-performance ML applications │\n",
"│ using the TrustformeRS framework. │\n",
"│ │\n",
"│ Date: {completion_date:20s} │\n",
"│ │\n",
"│ 🚀 Happy coding with TrustformeRS! 🦀 │\n",
"└─────────────────────────────────────────────────────────────┘\n",
"\"\"\"\n",
"\n",
"print(certificate)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.0"
}
},
"nbformat": 4,
"nbformat_minor": 4
}