rust_microservice/test.rs
//! # 🔬 Test Environment Infrastructure
//!
//! This module provides utilities for bootstrapping and managing an isolated
//! integration test environment powered by Docker containers and an async
//! server runtime.
//!
//! It is designed to:
//!
//! - Provision and manage test containers (e.g., Postgres, Keycloak)
//! - Coordinate initialization and shutdown across threads
//! - Provide global access to running containers
//! - Ensure deterministic teardown after tests complete
//! - Offer structured logging with colored output
//!
//! The module is intended for integration and end-to-end testing scenarios
//! where external dependencies must be provisioned dynamically.
//!
//! ---
//!
//! ## Architecture Overview
//!
//! The test environment follows a controlled lifecycle with three main phases:
//!
//! 1. **Setup Phase**
//!    - Initializes logging
//!    - Starts required containers
//!    - Executes optional post-initialization tasks
//!    - Signals readiness to the test runtime
//!
//! 2. **Execution Phase**
//!    - Tests run against the live server and provisioned services
//!    - Containers remain active and globally accessible
//!
//! 3. **Teardown Phase**
//!    - Receives shutdown signal
//!    - Stops and removes all registered containers
//!    - Releases global resources
//!
//! Synchronization between phases is handled using global channels and locks.
//!
//! ---
//!
//! ## Global Resource Management
//!
//! The module maintains global state using `OnceLock` to guarantee safe,
//! single initialization across threads:
//!
//! - **Docker Client**
//!   A shared connection to the Docker daemon used for container inspection
//!   and removal.
//!
//! - **Container Registry**
//!   A global, thread-safe map storing container names and their IDs. This
//!   enables coordinated teardown after tests complete.
//!
//! - **Lifecycle Channels**
//!   Internal channels synchronize initialization completion and shutdown
//!   signals between threads.
//!
//! ---
//!
//! ## Logging
//!
//! Logging is automatically configured during setup using `env_logger`.
//!
//! Features:
//!
//! - Colored log levels
//! - Timestamped output
//! - Module-aware formatting
//! - Environment-driven log filtering (`RUST_LOG`)
//! - Suppression of noisy framework logs by default
//!
//! This improves readability during test execution and debugging.
//!
//! ---
//!
//! ## Public API
//!
//! ### `setup`
//!
//! Bootstraps the full test environment.
//!
//! Responsibilities:
//!
//! - Initializes logging
//! - Executes user-provided initialization logic
//! - Starts the application server
//! - Runs optional post-initialization tasks
//! - Blocks until the environment is ready
//!
//! The initialization closure must return:
//!
//! - A collection of container handles (to prevent premature drop)
//! - Application `Settings` used to start the server
//!
//! ### `teardown`
//!
//! Gracefully shuts down the environment by:
//!
//! - Sending a stop signal to all containers
//! - Removing containers from Docker
//! - Waiting for shutdown confirmation
//!
//! This function should be called once after all tests complete.
//!
//! ### `get_container`
//!
//! Retrieves metadata for a registered container by name using the Docker API.
//!
//! ### `add_container`
//!
//! Registers a container in the global container map so it can be stopped
//! automatically during teardown.
//!
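//! As an illustration, a hedged sketch of registering and then inspecting a
//! container (the name and ID below are made up; in practice `add_container`
//! receives the ID reported by Testcontainers):
//!
//! ```ignore
//! // Register the container so teardown can remove it later.
//! rust_microservice::test::add_container("redis", "3f4e9c1ab2de".to_string()).await;
//!
//! // Later, look it up through the Docker API.
//! if let Some(info) = rust_microservice::test::get_container("redis").await {
//!     println!("container state: {:?}", info.state);
//! }
//! ```
//!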
//! ---
//!
//! ## Blocking Execution Helper
//!
//! The module provides an internal utility for executing async code from
//! synchronous contexts by creating a dedicated Tokio runtime. This is used
//! primarily during container shutdown and cleanup.
//!
//! ---
//!
//! ## Container Utilities
//!
//! The `containers` submodule provides helpers for starting commonly used
//! infrastructure services.
//!
//! ### Supported Services
//!
//! - **Postgres**
//!   Starts a database container with optional initialization scripts,
//!   network configuration, and credentials.
//!
//! - **Keycloak**
//!   Starts an identity provider container with realm import support and
//!   readiness checks.
//!
//! Each container:
//!
//! - Waits for readiness before returning
//! - Registers itself for automatic teardown
//! - Returns a connection URI for test usage
//!
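//! A minimal sketch of calling the Postgres helper from an initialization
//! closure (the init-script path and network name are placeholders):
//!
//! ```ignore
//! // `path` is bind-mounted at /docker-entrypoint-initdb.d inside the container.
//! let (container, uri) = containers::postgres(
//!     Some("tests/init-scripts".into()),
//!     Some("app_db".into()),
//!     Some("test_network".into()),
//!     None, // user: defaults to "postgres"
//!     None, // password: defaults to "postgres"
//! )
//! .await?;
//! info!("Postgres available at {}", uri);
//! ```
//!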
//! ---
//!
//! ## Error Handling
//!
//! All operations use the module-specific `TestError` type, which captures:
//!
//! - Container creation failures
//! - Filesystem path resolution errors
//! - Custom runtime errors
//!
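//! Callers that need to distinguish failure modes can match on the variants:
//!
//! ```ignore
//! match containers::postgres(None, None, None, None, None).await {
//!     Ok((_container, uri)) => info!("Postgres ready at {}", uri),
//!     Err(TestError::ContainerCreation(msg)) => panic!("container failed: {}", msg),
//!     Err(e) => panic!("unexpected test error: {}", e),
//! }
//! ```
//!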
//! ---
//!
//! ## Typical Usage Pattern
//!
//! ```ignore
//! #[ctor::ctor]
//! pub fn setup() {
//!     rust_microservice::test::setup(
//!         async || {
//!             let mut settings = load_test_settings();
//!
//!             // This vector serves as a workaround for Testcontainers’ automatic cleanup,
//!             // ensuring that containers remain available until all tests have completed.
//!             let mut containers: Vec<Box<dyn Any + Send>> = vec![];
//!
//!             let postgres = start_postgres_container(&mut settings).await;
//!             if let Ok(postgres) = postgres {
//!                 containers.push(Box::new(postgres.0));
//!             }
//!
//!             let keycloak = start_keycloak_container(&mut settings).await;
//!             if let Ok(keycloak) = keycloak {
//!                 containers.push(Box::new(keycloak.0));
//!             }
//!
//!             (containers, settings)
//!         },
//!         || async {
//!             info!("Getting authorization token ...");
//!             let oauth2_token = get_auth_token().await.unwrap_or("".to_string());
//!             TOKEN.set(oauth2_token);
//!             info!("Authorization token: {}...", token()[..50].bright_blue());
//!         },
//!     );
//! }
//! ```
//!
//! Teardown is handled automatically by the `dtor` attribute:
//!
//! ```ignore
//! #[ctor::dtor]
//! pub fn teardown() {
//!     rust_microservice::test::teardown();
//! }
//! ```
//!
//! ---
//!
//! ## Concurrency Model
//!
//! The environment runs inside a dedicated multi-thread Tokio runtime
//! spawned in a background thread. This allows synchronous test code to
//! coordinate with async infrastructure without requiring async test
//! functions.
//!
//! Communication is performed via channels that coordinate:
//!
//! - Initialization completion
//! - Container stop commands
//! - Shutdown confirmation
//!
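//! Because `setup` blocks until readiness is signaled, plain synchronous tests
//! can rely on the environment being up. A sketch (assuming an accessor paired
//! with `Server::set_global`; the exact name is hypothetical):
//!
//! ```ignore
//! #[test]
//! fn health_endpoint_returns_ok() {
//!     // setup() has already blocked until containers and server were ready.
//!     let server = Server::global();
//!     // ... issue requests against the running server ...
//! }
//! ```
//!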
//! ---
//!
//! ## Intended Use Cases
//!
//! This module is suitable for:
//!
//! - Integration testing
//! - End-to-end testing
//! - CI environments requiring ephemeral infrastructure
//! - Local development with disposable dependencies
//!
//! It is not intended for production runtime container management.
//!
//! ---
//!
//! ## Safety Guarantees
//!
//! - Containers remain alive for the full test lifecycle
//! - Teardown is deterministic and blocking
//! - Global state is initialized exactly once
//! - Async resources are properly awaited before shutdown
//!
//! ---
//!
//! ## Notes
//!
//! The environment assumes Docker is available and reachable using default
//! configuration. Failure to connect to the Docker daemon will cause setup
//! to abort.
//!
//! All containers are forcefully removed during teardown to ensure a clean
//! test environment for subsequent runs.
//!
use colored::Colorize;
use env_logger::{Builder, Env};
use std::any::Any;
use std::collections::HashMap;
use std::sync::mpsc::{self, Receiver, Sender};
use std::{io::Write, sync::OnceLock, thread};
use testcontainers::bollard::Docker;
use testcontainers::bollard::query_parameters::{
    InspectContainerOptionsBuilder, RemoveContainerOptionsBuilder,
};
use testcontainers::bollard::secret::ContainerInspectResponse;
use thiserror::Error;
use tokio::sync::Mutex;
use tokio::task;
use tracing::info;

use crate::Server;
use crate::settings::Settings;

pub type Result<T, E = TestError> = std::result::Result<T, E>;

#[derive(Debug, Error)]
pub enum TestError {
    #[error("Failed to get absolute path for mount source: {0}")]
    AbsolutePathConversion(String),

    #[error("Failed to create container: {0}")]
    ContainerCreation(String),

    #[error("{0}")]
    Custom(String),
}

enum ContainerCommands {
    Stop,
}

struct Channel<T> {
    tx: Sender<T>,
    rx: Mutex<Receiver<T>>,
}

/// Creates a new channel for sending and receiving messages of type `T`.
///
/// Returns a `Channel<T>` containing a sender and a receiver for the created channel.
fn channel<T>() -> Channel<T> {
    let (tx, rx) = mpsc::channel();
    Channel {
        tx,
        rx: Mutex::new(rx),
    }
}

/// A global, thread-safe map of container names to container IDs.
static CONTAINERS: OnceLock<Mutex<HashMap<String, String>>> = OnceLock::new();

// Holds a channel used to deliver container commands (e.g., `Stop`) to the
// background runtime.
static CONTAINER_NOTIFIER_CHANNEL: OnceLock<Channel<ContainerCommands>> = OnceLock::new();
fn container_notifier_channel() -> &'static Channel<ContainerCommands> {
    CONTAINER_NOTIFIER_CHANNEL.get_or_init(channel)
}

// Holds a channel used to wait for the shutdown notification from the teardown function.
static SHUTDOWN_NOTIFIER_CHANNEL: OnceLock<Channel<()>> = OnceLock::new();
fn shutdown_notifier_channel() -> &'static Channel<()> {
    SHUTDOWN_NOTIFIER_CHANNEL.get_or_init(channel)
}

// Holds a channel used to notify when initialization is complete.
static INITIALIZE_NOTIFIER_CHANNEL: OnceLock<Channel<()>> = OnceLock::new();
fn initialize_notifier_channel() -> &'static Channel<()> {
    INITIALIZE_NOTIFIER_CHANNEL.get_or_init(channel)
}

// Holds the shared Docker client used for container inspection and removal.
static DOCKER_CLIENT: OnceLock<Docker> = OnceLock::new();
pub(crate) fn docker_client() -> &'static Docker {
    DOCKER_CLIENT.get_or_init(|| {
        Docker::connect_with_defaults().expect("Failed to connect to Docker daemon.")
    })
}

/// Retrieves a container by its name from the container map.
///
/// Returns an `Option<ContainerInspectResponse>` with the inspection data for
/// the requested container if it is registered and known to Docker; otherwise
/// returns `None`.
pub async fn get_container(name: &str) -> Option<ContainerInspectResponse> {
    let container_id = CONTAINERS
        .get_or_init(|| Mutex::new(HashMap::new()))
        .lock()
        .await
        .get(name)
        .cloned();

    if let Some(container_id) = container_id {
        let options = InspectContainerOptionsBuilder::default().build();
        let res = docker_client()
            .inspect_container(container_id.as_str(), Some(options))
            .await;
        if let Ok(data) = res {
            return Some(data);
        }
    }

    None
}

/// Adds a container to the container map.
///
/// Inserts the given `name` and container `id` into the global, thread-safe
/// map of container names to container IDs, so the container can be stopped
/// automatically during teardown.
///
/// The call awaits the map lock, so it returns only once the insertion is complete.
pub async fn add_container(name: &str, id: String) {
    CONTAINERS
        .get_or_init(|| Mutex::new(HashMap::new()))
        .lock()
        .await
        .insert(name.to_string(), id);
}

/// Sets up the test environment by initializing the logger, printing an ASCII
/// art banner, and spawning a new thread that will execute the given closures.
///
/// The `init` closure returns a `Future` that is executed on a dedicated
/// runtime. It must resolve to the container handles (kept alive to prevent
/// premature drop) and the `Settings` used to start the server, and it must
/// complete before the test environment is considered initialized.
///
/// This function blocks until the setup signal is received, so tests only
/// start once the environment is ready. The background runtime then waits for
/// the shutdown signal before shutting down; that signal is sent after all
/// containers have been stopped and removed.
pub fn setup<F, P, Fut, PostFut>(init: F, post_init: P)
where
    F: FnOnce() -> Fut + Send + 'static,
    P: FnOnce() -> PostFut + Send + 'static,
    Fut: Future<Output = (Vec<Box<dyn Any + Send>>, Settings)> + Send + 'static,
    PostFut: Future<Output = ()> + Send + 'static,
{
    configure_log();

    let ascii_art = r#"
    ____ __ __ _ ______ __
    / _/___ / /_ ___ ___ _ ____ ___ _ / /_ (_)___ ___ /_ __/___ ___ / /_ ___
    _/ / / _ \/ __// -_)/ _ `// __// _ `// __// // _ \ / _ \ / / / -_)(_-</ __/(_-<
    /___//_//_/\__/ \__/ \_, //_/ \_,_/ \__//_/ \___//_//_/ /_/ \__//___/\__//___/
    /___/
    "#;
    println!("{}", ascii_art);

    info!("Initializing Test Environment ...");

    thread::spawn(move || {
        let body = async move {
            // This vector serves as a workaround for Testcontainers’ automatic cleanup,
            // ensuring that containers remain available until all tests have completed.
            let (mut _containers, settings) = init().await;

            info!("Starting Server ...");
            let result = Server::new_with_settings(settings).await;
            match result {
                Ok(server) => {
                    let result = server.intialize_database().await;
                    if let Ok(server) = result {
                        info!("{}", "Server started successfully!".bright_blue());
                        Server::set_global(server);
                    }
                }
                Err(e) => {
                    panic!("Failed to start server: {}", e);
                }
            }

            info!("Processing Post Initialization Tasks...");
            post_init().await;

            // Send the setup signal to indicate that initialization is complete
            // and the tests can proceed.
            initialize_notifier_channel()
                .tx
                .send(())
                .expect("Failed to send setup signal.");

            // Wait for container commands (e.g., Stop) before shutting down.
            let _ = task::spawn_blocking(move || {
                let rx = container_notifier_channel()
                    .rx
                    .blocking_lock()
                    .recv()
                    .expect("Failed to receive container command notification.");

                match rx {
                    ContainerCommands::Stop => {
                        info!("Shutting Down Test Environment. Stopping Containers...");

                        // Shut down all containers
                        CONTAINERS
                            .get_or_init(|| Mutex::new(HashMap::new()))
                            .blocking_lock()
                            .iter()
                            .for_each(|(name, container)| {
                                execute_blocking(async || {
                                    info!(
                                        "Stopping Container with name {} and id {}",
                                        name.bright_blue(),
                                        (&container[..13]).bright_blue()
                                    );
                                    let opts = RemoveContainerOptionsBuilder::default()
                                        .force(true)
                                        .v(true)
                                        .build();
                                    let res = docker_client()
                                        .remove_container(container, Some(opts))
                                        .await;
                                    if res.is_err() {
                                        info!("Failed to remove container: {:?}", res);
                                    }
                                });
                            });

                        info!("All containers have been successfully stopped.");
                    }
                }
            })
            .await;

            // This must run here; otherwise the containers would not be dropped
            // before the application stops.
            shutdown_notifier_channel()
                .tx
                .send(())
                .expect("Failed to send shutdown signal.");

            info!(
                "{}",
                "The test environment has been shut down successfully.".bright_green()
            );
        };

        tokio::runtime::Builder::new_multi_thread()
            .enable_all()
            .build()
            .expect("Cannot create Tests Tokio Runtime.")
            .block_on(body);
    });

    // Wait for the setup signal before proceeding.
    initialize_notifier_channel()
        .rx
        .blocking_lock()
        .recv()
        .expect("Failed to receive setup signal.");

    info!(
        "{} {}",
        "The test environment has been initialized successfully.",
        "Starting Tests...".bright_green()
    );
}

/// Shuts down the test environment.
///
/// Sends the `Stop` command to the background runtime, then blocks on the
/// shutdown signal channel until the signal is received, indicating that all
/// containers have been stopped and the test environment has shut down.
pub fn teardown() {
    // Send the stop command to the background runtime.
    let _ = container_notifier_channel()
        .tx
        .send(ContainerCommands::Stop);

    // Wait for the shutdown signal.
    // This ensures that all containers have been properly shut down before the app exits.
    let guard = shutdown_notifier_channel().rx.try_lock();
    if guard.is_err() {
        panic!("Failed to acquire the shutdown signal receiver.");
    }

    if let Ok(rx) = guard {
        let _ = rx.recv();
    }
}

/// Executes a given future in a blocking manner.
///
/// This function creates a new instance of the Tokio runtime and
/// blocks on the given future, waiting for its completion.
///
/// This function is useful when you need to execute a future in a
/// blocking manner, such as in tests or command-line applications.
pub(crate) fn execute_blocking<F, Fut>(future: F)
where
    F: FnOnce() -> Fut,
    Fut: Future<Output = ()>,
{
    let rt = tokio::runtime::Runtime::new().expect("Cannot create Tokio Runtime.");
    rt.block_on(future());
}

/// Configures and initializes the application logger.
///
/// This method sets up the logger using environment variables, applying a default
/// log level configuration when none is provided. It defines a custom log format
/// with colored log levels, timestamps, module paths, and messages to improve
/// readability during development and debugging.
///
/// # Behavior
///
/// - Uses the `RUST_LOG` environment variable when available.
/// - Defaults to `info` level and suppresses noisy logs from `actix_web`
///   and `actix_web_prom`.
/// - Applies colorized output based on the log level.
/// - Formats log entries with timestamp, level, module path, and message.
fn configure_log() {
    // Initialize Logger ENV
    let level = Env::default().default_filter_or("info,actix_web=error,actix_web_prom=error");

    let _ = Builder::from_env(level)
        .format(|buf, record| {
            let level = match record.level() {
                log::Level::Info => record.level().as_str().bright_green(),
                log::Level::Debug => record.level().as_str().bright_blue(),
                log::Level::Trace => record.level().as_str().bright_cyan(),
                log::Level::Warn => record.level().as_str().bright_yellow(),
                log::Level::Error => record.level().as_str().bright_red(),
            };

            let datetime = chrono::Local::now()
                .format("%d-%m-%YT%H:%M:%S%.3f%:z")
                .to_string()
                .white();

            // Align timestamp, level, and module path
            writeln!(
                buf,
                "{:<24} {:<5} [{:<40}] - {}",
                datetime,                                         // Timestamp
                level,                                            // Log level
                record.module_path().unwrap_or("unknown").blue(), // Module path
                record.args()                                     // Log message
            )
        })
        .try_init();
}

pub mod containers {
    use colored::Colorize;
    use std::{fs, path::Path, time::Duration};

    use testcontainers::{
        ContainerAsync, CopyDataSource, GenericImage, ImageExt,
        core::{Mount, WaitFor, ports::IntoContainerPort, wait::HttpWaitStrategy},
        runners::AsyncRunner,
    };
    use testcontainers_modules::postgres::Postgres;
    use tracing::{debug, info};

    use crate::test::Result;
    use crate::test::TestError;

    /// Returns the absolute path of a given path string.
    ///
    /// # Errors
    ///
    /// If the path cannot be canonicalized or is not valid UTF-8, a
    /// `TestError::AbsolutePathConversion` error is returned.
    fn absolute_path(path: &str) -> Result<String> {
        let path = Path::new(path)
            .canonicalize()
            .map_err(|e| TestError::AbsolutePathConversion(e.to_string()))?
            .to_str()
            .ok_or_else(|| {
                TestError::AbsolutePathConversion("Path is not valid UTF-8".to_string())
            })?
            .to_string();
        Ok(path)
    }

    /// Starts a Keycloak container with a given realm data path and network.
    ///
    /// # Parameters
    ///
    /// * `realm_data_path`: The path to the realm data JSON file to be imported.
    /// * `network`: The Docker network to attach the container to.
    ///
    /// # Returns
    ///
    /// A `Result` containing the container handle and the URI of the Keycloak
    /// instance if successful, or a `TestError` if an error occurred.
    pub async fn keycloak(
        realm_data_path: &str,
        network: &str,
    ) -> Result<(ContainerAsync<GenericImage>, String)> {
        let realm =
            fs::read(realm_data_path).map_err(|e| TestError::ContainerCreation(e.to_string()))?;

        let container = GenericImage::new("quay.io/keycloak/keycloak", "26.5.2")
            .with_exposed_port(8080.tcp())
            .with_exposed_port(9000.tcp())
            .with_wait_for(WaitFor::http(
                HttpWaitStrategy::new("/health/ready")
                    .with_port(9000.into())
                    .with_expected_status_code(200u16),
            ))
            .with_cmd(vec!["start-dev", "--import-realm"])
            //.with_reuse(ReuseDirective::Always)
            .with_network(network)
            .with_copy_to(
                "/opt/keycloak/data/import/realm-export.json",
                CopyDataSource::Data(realm),
            )
            .with_startup_timeout(Duration::from_secs(60))
            .with_env_var("KC_BOOTSTRAP_ADMIN_USERNAME", "admin")
            .with_env_var("KC_BOOTSTRAP_ADMIN_PASSWORD", "123456")
            .with_env_var("KC_HTTP_ENABLED", "true")
            .with_env_var("KC_HTTP_HOST", "0.0.0.0")
            .with_env_var("KC_HEALTH_ENABLED", "true")
            .with_env_var("KC_CACHE", "local")
            .with_env_var("KC_FEATURES", "scripts")
            .with_env_var("TZ", "America/Sao_Paulo")
            .start()
            .await
            .map_err(|e| {
                info!("Error starting Keycloak container: {}", e.to_string());
                TestError::ContainerCreation(e.to_string())
            })?;

        let container_ip = container
            .get_host()
            .await
            .map_err(|e| TestError::ContainerCreation(e.to_string()))?
            .to_string();

        let container_port = container
            .get_host_port_ipv4(8080)
            .await
            .map_err(|e| TestError::ContainerCreation(e.to_string()))?
            .to_string();

        // IMPORTANT: Add container to global container map for teardown
        super::add_container("keycloak", container.id().to_string()).await;

        let uri = format!("http://{}:{}", container_ip, container_port);
        debug!("Keycloak Connection URL: {}", uri.bright_blue());

        Ok((container, uri))
    }

    /// Creates a Postgres container with a default configuration.
    ///
    /// This function creates a new Postgres container with a default configuration,
    /// and returns the container handle together with its connection URL. The
    /// container is added to the global container map for teardown.
    ///
    /// # Parameters
    ///
    /// - `path`: Optional path to a directory of initialization scripts, bind-mounted
    ///   at `/docker-entrypoint-initdb.d`.
    /// - `database`: Optional name of the Postgres database (defaults to `postgres`).
    /// - `network`: Optional Docker network to attach the container to (defaults to
    ///   `test_network`).
    /// - `user`: Optional database user for the connection URL (defaults to `postgres`).
    /// - `password`: Optional database password for the connection URL (defaults to
    ///   `postgres`).
    ///
    /// # Errors
    ///
    /// If an error occurs while creating the container, or while getting its host
    /// address or port, a `TestError::ContainerCreation` error is returned.
    ///
    /// # Returns
    ///
    /// A `Result` containing the container handle and the connection URL of the
    /// Postgres container.
    pub async fn postgres(
        path: Option<String>,
        database: Option<String>,
        network: Option<String>,
        user: Option<String>,
        password: Option<String>,
    ) -> Result<(ContainerAsync<Postgres>, String)> {
        info!("Creating Default Postgres Container...");

        let mut entry_point = None;
        if let Some(path) = path {
            let mount_source_str = absolute_path(&path)?;
            entry_point = Some(Mount::bind_mount(
                mount_source_str.as_str(),
                "/docker-entrypoint-initdb.d",
            ));
        }

        let builder = Postgres::default()
            .with_db_name(database.clone().unwrap_or("postgres".into()).as_str());

        let mut builder = builder
            .with_network(network.unwrap_or("test_network".into()))
            //.with_reuse(ReuseDirective::Always)
            .with_startup_timeout(Duration::from_secs(30));

        if let Some(entry_point) = entry_point {
            builder = builder.with_mount(entry_point);
        }

        let container = builder
            .start()
            .await
            .map_err(|e| TestError::ContainerCreation(e.to_string()))?;

        let container_ip = container
            .get_host()
            .await
            .map_err(|e| TestError::ContainerCreation(e.to_string()))?;

        let container_port = container
            .get_host_port_ipv4(5432)
            .await
            .map_err(|e| TestError::ContainerCreation(e.to_string()))?;

        let uri = format!(
            "postgres://{}:{}@{}:{}/{}",
            user.unwrap_or("postgres".into()),
            password.unwrap_or("postgres".into()),
            container_ip,
            container_port,
            database.unwrap_or("postgres".into())
        );
        debug!("Default Postgres Connection URL: {}", uri.bright_blue());

        // IMPORTANT: Add container to global container map for teardown
        super::add_container("postgres", container.id().to_string()).await;

        Ok((container, uri))
    }
}