A “lossless” representation of an entire Rust file, preserving all
comments/whitespace (via InterstitialSegment) plus each recognized item’s
exact original text in LosslessItem.
An item with its exact text snippet.
We do not store a separate range because ConsolidatedItem already
has text_range() (e.g. for a CrateInterfaceItem, it is ci.text_range()).
An enum representing either a “single crate” handle or an entire workspace.
Use SingleOrWorkspace::detect(...) to try loading a workspace; if that
fails specifically with WorkspaceError::ActuallyInSingleCrate {..}, we
fall back to a single-crate handle.
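A minimal sketch of this fallback shape, with `load_workspace` and the string payloads standing in for the real workspace/crate handle types (all names here are hypothetical stand-ins, not the actual API):

```rust
#[derive(Debug)]
enum WorkspaceError {
    ActuallyInSingleCrate { path: String },
    Other(String),
}

enum SingleOrWorkspace {
    Single(String),    // stand-in for a crate handle
    Workspace(String), // stand-in for a workspace handle
}

// Hypothetical loader: pretend any path containing "crate" is a single crate.
fn load_workspace(path: &str) -> Result<String, WorkspaceError> {
    if path.contains("crate") {
        Err(WorkspaceError::ActuallyInSingleCrate { path: path.to_string() })
    } else {
        Ok(path.to_string())
    }
}

impl SingleOrWorkspace {
    fn detect(path: &str) -> Result<Self, WorkspaceError> {
        match load_workspace(path) {
            Ok(ws) => Ok(SingleOrWorkspace::Workspace(ws)),
            // The one error we recover from: downgrade to a single crate.
            Err(WorkspaceError::ActuallyInSingleCrate { path }) => {
                Ok(SingleOrWorkspace::Single(path))
            }
            Err(e) => Err(e),
        }
    }
}
```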
One pragmatic way to address “losing access to the crate or workspace handle”
is to store enough information inside each request so that, after the AI
expansions finish, you can locate and update the relevant Cargo.toml and README.
That can be done in at least two ways:
Trait for focusing on a single crate/workspace:
produce the subgraph of internal dependencies that lead up to the focus,
then return either a flat or layered topological ordering.
We add a new method to CrateHandleInterface so we can read file text from
an in-memory mock or from the real filesystem. For your real code,
you might implement it differently.
Minimal BFS-based function that adds crate_handle and all its internal dependencies
(recursively) into graph, returning the node index of crate_handle in that graph.
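A simplified std-only sketch of that BFS; `deps` maps each crate to its internal dependencies and a plain `Vec<String>` plays the role of the graph's node list (the real code presumably uses a proper graph type and crate handles):

```rust
use std::collections::{HashMap, VecDeque};

// Inserts `root` and every transitively reachable dependency into `nodes`
// exactly once, in BFS order, and returns the index of `root` in `nodes`.
fn add_crate_and_deps(
    deps: &HashMap<String, Vec<String>>,
    root: &str,
    nodes: &mut Vec<String>,
) -> usize {
    let mut index: HashMap<String, usize> =
        nodes.iter().cloned().zip(0..).collect();
    let root_idx = *index.entry(root.to_string()).or_insert_with(|| {
        nodes.push(root.to_string());
        nodes.len() - 1
    });
    let mut queue = VecDeque::from([root.to_string()]);
    while let Some(current) = queue.pop_front() {
        for dep in deps.get(&current).into_iter().flatten() {
            if !index.contains_key(dep) {
                index.insert(dep.clone(), nodes.len());
                nodes.push(dep.clone());
                queue.push_back(dep.clone());
            }
        }
    }
    root_idx
}
```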
Updated build_new_use_lines to accept the trailing_comments map as a third parameter.
If your existing tests only pass two arguments, just add &BTreeMap::new() as the third.
Builds the final snippet for the “has_imports_line=true” scenario:
(a) old_top_macros first (with their leading comments),
(b) then non-macro lines,
(c) then new_top_macros.
The key fix: remove the trim_start() on the remainder.
We want to preserve leading blank lines in the remainder so that if
there was a blank line after some block comment, it stays in the final output.
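The point can be reduced to a tiny hypothetical helper: when stitching the rewritten import block back onto the remainder, the remainder is kept verbatim, because a `trim_start()` here would swallow a blank line that followed a block comment:

```rust
// Hypothetical stand-in for the snippet-building step.
// NOT `remainder.trim_start()`: leading blank lines must survive.
fn build_final(new_imports: &str, remainder: &str) -> String {
    format!("{new_imports}{remainder}")
}
```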
Returns a TextRange for node excluding any leading/trailing
normal comments and whitespace. Doc comments (///, //!,
/**, /*! etc.) remain inside. Normal // or /* ... */ is trimmed.
“Dedent” lines by the minimal leading-space count among all lines that
have indent > 0, ignoring blank lines, brace-only lines, and lines that
already have indent 0. If none are found, does nothing.
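A sketch of that rule as described (the function name follows the doc, but the body is an assumed implementation, not the original):

```rust
// Strip the smallest positive indent found among non-blank,
// non-brace-only lines from every line that has leading spaces.
fn conditional_dedent_all(lines: &[String]) -> Vec<String> {
    let min_indent = lines
        .iter()
        .filter(|l| {
            let t = l.trim();
            !t.is_empty() && t != "{" && t != "}"
        })
        .map(|l| l.chars().take_while(|c| *c == ' ').count())
        .filter(|&n| n > 0)
        .min();
    let Some(n) = min_indent else {
        return lines.to_vec(); // nothing indented: do nothing
    };
    lines
        .iter()
        .map(|l| {
            let have = l.chars().take_while(|c| *c == ' ').count();
            l[have.min(n)..].to_string()
        })
        .collect()
}
```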
Builds a string with one line per TopBlockMacro, preserving each macro’s leading comments
and ensuring each macro ends up on its own line (with no trailing newline).
Detects a same-line trailing // comment starting just after offset pos
(e.g. after the semicolon). If found, returns (comment_text, total_length)
so we can store that comment text and expand the removal range.
If none found, returns None.
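A plausible std-only implementation of that check (a sketch; the real function may differ in how it treats offsets and comment text):

```rust
// Starting at `pos` (assumed to sit just past the semicolon, on a char
// boundary), skip spaces/tabs on the same line; if a `//` comment follows,
// return its text plus the total length consumed from `pos` through the
// end of the comment, so the caller can widen the removal range.
fn detect_trailing_comment(text: &str, pos: usize) -> Option<(String, usize)> {
    let rest = &text[pos..];
    let line_end = rest.find('\n').unwrap_or(rest.len());
    let same_line = &rest[..line_end];
    let skipped = same_line.len()
        - same_line
            .trim_start_matches(|c: char| c == ' ' || c == '\t')
            .len();
    let after_ws = &same_line[skipped..];
    if after_ws.starts_with("//") {
        Some((after_ws.to_string(), skipped + after_ws.len()))
    } else {
        None
    }
}
```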
A small helper that checks if crate_name starts with exactly one prefix + "-" from the
scanned groups. If we find exactly one match, returns that prefix. If zero or multiple, None.
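Sketched with a hypothetical name, the "exactly one match" rule looks like this:

```rust
// Returns the prefix iff exactly one group prefix matches `crate_name`
// as "<prefix>-..."; zero or multiple matches yield None.
fn find_unique_prefix<'a>(crate_name: &str, prefixes: &'a [String]) -> Option<&'a str> {
    let mut matches = prefixes
        .iter()
        .filter(|p| crate_name.starts_with(&format!("{p}-")));
    match (matches.next(), matches.next()) {
        (Some(p), None) => Some(p.as_str()),
        _ => None,
    }
}
```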
Strip out { ... } if the entire snippet is enclosed in them.
Split into lines (the caller typically does lines()).
Normalize blank lines (trim leading/trailing, collapse consecutive).
Dedent or clamp depending on your policy: either conditional_dedent_all
if you want a minimal-based dedent, or clamp_indent_at_4 if you want to
ensure no line exceeds 4 spaces.
(Or optionally do both. Up to you.)
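The blank-line normalization step above can be sketched on its own (an assumed implementation matching the stated behavior):

```rust
// Drop leading and trailing blank lines and collapse runs of
// consecutive blanks into a single blank line.
fn normalize_blank_lines(lines: &[&str]) -> Vec<String> {
    let mut out: Vec<String> = Vec::new();
    for line in lines {
        let blank = line.trim().is_empty();
        // Skip a blank if it would be first, or would follow another blank.
        if blank && out.last().map_or(true, |l: &String| l.trim().is_empty()) {
            continue;
        }
        out.push(line.to_string());
    }
    while out.last().map_or(false, |l| l.trim().is_empty()) {
        out.pop();
    }
    out
}
```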
Gather all the raw attributes from a node (e.g. #[derive(Debug)], #[cfg(feature="xyz")],
possibly multiline like #[my_attr(\n Something\n)]), returning them as separate lines—
one per distinct Attr node. If you want to force each #[...] node into a single line
(removing internal newlines), you can strip out \n and \r. That way “multiline” attributes
become one joined line. This matches some test scenarios that expect one line per attribute.
Gathers all fn items from within an impl block, excluding any that
should_skip_item_fn determines should be skipped (e.g., private methods,
test methods if not .include_test_items(), or with #[some_other_attr], etc.).
Gathers Rust items (fn, struct, enum, mod, trait, etc.) from parent_node.
In older RA versions, top-level items appear in a SourceFile or ItemList.
We’ll check those first; if that fails, we fall back to iterating .children().
A helper that walks backward from item.syntax() to pick up line-comments,
similarly to what we do for old macros. If no separate tokens are found,
but the item’s own text starts with //, we parse those lines directly from
full_text as a fallback.
Recursively scans the directory root_dir for .rs files.
Returns an empty vector if root_dir doesn’t exist or isn’t a directory.
Skips any directories it can’t read.
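A std-only sketch of that scan (the function name is assumed from the description):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Recursively collect *.rs files under `root_dir`. Returns an empty Vec
// if the root is missing or not a directory; unreadable subdirectories
// are silently skipped.
fn find_rs_files(root_dir: &Path) -> Vec<PathBuf> {
    let mut out = Vec::new();
    if !root_dir.is_dir() {
        return out;
    }
    let Ok(entries) = fs::read_dir(root_dir) else {
        return out; // unreadable directory: skip
    };
    for entry in entries.flatten() {
        let path = entry.path();
        if path.is_dir() {
            out.extend(find_rs_files(&path));
        } else if path.extension().map_or(false, |e| e == "rs") {
            out.push(path);
        }
    }
    out
}
```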
Now we always return three items: (grouped_map, comment_map, trailing_comments).
If you have code/tests that only care about grouped_map & comment_map,
just destructure as:
let (grouped_map, comment_map, _trailing) = group_and_sort_uses(…);
ignoring the trailing_comments value.
A small helper to map AiFileFilterError => WorkspaceError.
You might prefer to define impl From<AiFileFilterError> for WorkspaceError
in your code. This is just a local approach.
Iterates over a TOML table (e.g., [dependencies] or similar),
and pins any wildcard dependencies found. This ensures that workspace
dependencies do not remain * when building or releasing.
Pins a wildcard string dependency (e.g. serde = "*") from the lockfile if its version is *.
If no entry is found in the lockfile, logs a warning and leaves it as *.
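A minimal sketch of this lookup, modeling the lockfile as a name-to-version map rather than a parsed Cargo.lock (the function name and shape are assumptions):

```rust
use std::collections::HashMap;

// If the declared version is "*", pin it to the locked version when the
// lockfile has an entry; otherwise warn and leave the wildcard in place.
fn pin_wildcard(name: &str, version: &mut String, lockfile: &HashMap<String, String>) {
    if version.as_str() != "*" {
        return; // already pinned
    }
    match lockfile.get(name) {
        Some(locked) => *version = locked.clone(),
        None => eprintln!("warning: no lockfile entry for `{name}`, leaving `*`"),
    }
}
```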
Pins a wildcard sub-table dependency (e.g. [dependencies.somecrate] version="*", path="...").
If the dependency has a local path, uses GetVersionOfLocalDep to retrieve the version.
Otherwise, falls back to the lockfile.
Key idea: Use a higher-ranked trait bound (HRTB) so that the closure
can borrow &mut workspace and &crate_name for the duration of the
async future without lifetime conflicts.
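The lifetime shape of the pattern can be shown with a synchronous closure (the real helper returns a future, and `Workspace` here is a stand-in type): the `for<'a>` bound lets the closure borrow both arguments for whatever lifetime the caller's body needs, instead of tying them to a single named lifetime on the function.

```rust
struct Workspace {
    name: String,
}

// HRTB: `F` must accept the two borrows for *any* lifetime 'a,
// which sidesteps the conflict a plain `<'a>` on the fn would cause.
fn run_with_workspace_and_crate_name<F, R>(ws: &mut Workspace, crate_name: &str, f: F) -> R
where
    F: for<'a> FnOnce(&'a mut Workspace, &'a str) -> R,
{
    f(ws, crate_name)
}
```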
We define a new helper function that is basically the same pattern as
run_with_workspace_and_crate_name, but it takes two crate name strings.
This avoids the lifetime conflict by using a higher-ranked trait bound.
Validates the provided TOML data, ensuring that a [package] section
is present and its fields meet certain criteria (e.g. valid SemVer version,
mandatory fields like authors, license, etc.).
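A simplified sketch of such a check, using plain struct fields in place of parsed TOML and a strict MAJOR.MINOR.PATCH test in place of a full SemVer parser (all names here are hypothetical):

```rust
struct PackageSection {
    name: Option<String>,
    version: Option<String>,
    authors: Option<Vec<String>>,
    license: Option<String>,
}

// Strict numeric MAJOR.MINOR.PATCH only; a real check would also
// accept pre-release and build-metadata suffixes.
fn is_valid_semver(v: &str) -> bool {
    let parts: Vec<&str> = v.split('.').collect();
    parts.len() == 3
        && parts
            .iter()
            .all(|p| !p.is_empty() && p.chars().all(|c| c.is_ascii_digit()))
}

fn validate_package(pkg: &PackageSection) -> Result<(), String> {
    if pkg.name.as_deref().map_or(true, str::is_empty) {
        return Err("missing `name`".into());
    }
    let version = pkg.version.as_deref().ok_or("missing `version`")?;
    if !is_valid_semver(version) {
        return Err(format!("invalid SemVer version: {version}"));
    }
    if pkg.authors.as_ref().map_or(true, |a| a.is_empty()) {
        return Err("missing `authors`".into());
    }
    if pkg.license.as_deref().map_or(true, str::is_empty) {
        return Err("missing `license`".into());
    }
    Ok(())
}
```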