[rustdoc] Do not emit redundant_explicit_links lint if the doc comment comes from expansion #141648

Open · wants to merge 5 commits into base: master

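For context, `rustdoc::redundant_explicit_links` fires when a link's explicit target resolves to the same item as its display text, and suggests dropping the target. Below is a minimal example of what the lint flags (illustrative only, not taken from the PR); the change in this PR is to skip the lint when the offending doc text comes from a macro expansion, where that suggestion cannot be applied by the user:

```rust
#![warn(rustdoc::redundant_explicit_links)]

// rustdoc warns on the explicit target below: the display text `String`
// already resolves to the same item, so the suggested fix is just [`String`].
/// Prefer [`String`](String) for owned text.
pub struct Handwritten;
```
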
97 changes: 62 additions & 35 deletions compiler/rustc_resolve/src/rustdoc.rs
@@ -49,6 +49,9 @@ pub struct DocFragment {
pub doc: Symbol,
pub kind: DocFragmentKind,
pub indent: usize,
/// Because we tamper with the spans context, this information cannot be correctly retrieved
/// later on. So instead, we compute it and store it here.
pub from_expansion: bool,
}

#[derive(Clone, Copy, Debug)]
@@ -208,17 +211,18 @@ pub fn attrs_to_doc_fragments<'a, A: AttributeExt + Clone + 'a>(
for (attr, item_id) in attrs {
if let Some((doc_str, comment_kind)) = attr.doc_str_and_comment_kind() {
let doc = beautify_doc_string(doc_str, comment_kind);
let (span, kind) = if attr.is_doc_comment() {
(attr.span(), DocFragmentKind::SugaredDoc)
let (span, kind, from_expansion) = if attr.is_doc_comment() {
let span = attr.span();
(span, DocFragmentKind::SugaredDoc, span.from_expansion())
Comment on lines 212 to +216 (Contributor):

This is actually a bit odd to me: `doc_str_and_comment_kind` knows whether a doc fragment is raw or a comment, but then we throw that info away only to recalculate it immediately. These functions probably get inlined and LLVM probably de-duplicates the match, but it's still a bit messy, especially considering this is the only call site of `doc_str_and_comment_kind` besides one place in clippy that only checks whether the return value is `None`.

} else {
(
attr.value_span()
.map(|i| i.with_ctxt(attr.span().ctxt()))
.unwrap_or(attr.span()),
DocFragmentKind::RawDoc,
)
let attr_span = attr.span();
let (span, from_expansion) = match attr.value_span() {
Some(sp) => (sp.with_ctxt(attr_span.ctxt()), sp.from_expansion()),
None => (attr_span, attr_span.from_expansion()),
};
(span, DocFragmentKind::RawDoc, from_expansion)
};
let fragment = DocFragment { span, doc, kind, item_id, indent: 0 };
let fragment = DocFragment { span, doc, kind, item_id, indent: 0, from_expansion };
doc_fragments.push(fragment);
} else if !doc_only {
other_attrs.push(attr.clone());
@@ -500,17 +504,26 @@ fn collect_link_data<'input, F: BrokenLinkCallback<'input>>(
display_text.map(String::into_boxed_str)
}

/// Returns a span encompassing all the document fragments.
pub fn span_of_fragments(fragments: &[DocFragment]) -> Option<Span> {
if fragments.is_empty() {
return None;
}
let start = fragments[0].span;
if start == DUMMY_SP {
/// Returns a tuple containing a span encompassing all the document fragments and a boolean that is
/// `true` if any of the fragments are from a macro expansion.
pub fn span_of_fragments_with_expansion(fragments: &[DocFragment]) -> Option<(Span, bool)> {
let (first_fragment, last_fragment) = match fragments {
[] => return None,
[first, .., last] => (first, last),
[first] => (first, first),
};
if first_fragment.span == DUMMY_SP {
return None;
}
let end = fragments.last().expect("no doc strings provided").span;
Some(start.to(end))
Some((
first_fragment.span.to(last_fragment.span),
fragments.iter().any(|frag| frag.from_expansion),
Comment (Contributor):

This probably produces false negatives if all of the following are true:

  1. there is a macro that produces sugared doc comments
  2. there is a non-macro generated sugared doc comment containing a redundant explicit link
  3. there are no raw doc fragments on the item

This may be an acceptable regression, but I thought it was worth noting.

This could perhaps be made even more unlikely by trying the unambiguous-substring heuristic first, or it could be eliminated completely by first calculating the final source span and then only checking the `from_expansion` of fragments that overlap with that source span.
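For concreteness, a hypothetical reproduction of that scenario (not taken from the PR; the macro is invented): the generated line is a sugared doc comment whose span should come from the expansion, the handwritten line passed through the matcher should keep its original span, and there are no raw `#[doc = ...]` fragments on the item.

```rust
macro_rules! with_generated_summary {
    ($item:item) => {
        /// Generated summary line (sugared doc comment, span from the expansion).
        $item
    };
}

with_generated_summary! {
    /// Handwritten line with a redundant explicit link: [`String`](String).
    pub struct Example;
}
```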

Reply (Member Author):

I suggest opening an issue once this PR is merged and sending a PR with a regression test case. :)

Reply (Member Author):

Ah, you propose a solution below; copying it then.

))
}

/// Returns a span encompassing all the document fragments.
pub fn span_of_fragments(fragments: &[DocFragment]) -> Option<Span> {
span_of_fragments_with_expansion(fragments).map(|(sp, _)| sp)
}

/// Attempts to match a range of bytes from parsed markdown to a `Span` in the source code.
@@ -524,18 +537,22 @@ pub fn span_of_fragments(fragments: &[DocFragment]) -> Option<Span> {
/// This method will return `Some` only if one of the following is true:
///
/// - The doc is made entirely from sugared doc comments, which cannot contain escapes
/// - The doc is entirely from a single doc fragment with a string literal exactly equal to `markdown`.
/// - The doc is entirely from a single doc fragment with a string literal exactly equal to
/// `markdown`.
/// - The doc comes from `include_str!`
/// - The doc includes exactly one substring matching `markdown[md_range]` which is contained in a single doc fragment.
/// - The doc includes exactly one substring matching `markdown[md_range]` which is contained in a
/// single doc fragment.
///
/// This function is defined in the compiler so it can be used by both `rustdoc` and `clippy`.
///
/// This function is defined in the compiler so it can be used by
/// both `rustdoc` and `clippy`.
/// It returns a tuple containing a span encompassing all the document fragments and a boolean that
/// is `true` if any of the *matched* fragments are from a macro expansion.
pub fn source_span_for_markdown_range(
tcx: TyCtxt<'_>,
markdown: &str,
md_range: &Range<usize>,
fragments: &[DocFragment],
) -> Option<Span> {
) -> Option<(Span, bool)> {
let map = tcx.sess.source_map();
source_span_for_markdown_range_inner(map, markdown, md_range, fragments)
}
@@ -546,7 +563,7 @@ pub fn source_span_for_markdown_range_inner(
markdown: &str,
md_range: &Range<usize>,
fragments: &[DocFragment],
) -> Option<Span> {
) -> Option<(Span, bool)> {
use rustc_span::BytePos;

if let &[fragment] = &fragments
@@ -557,11 +574,14 @@
&& let Ok(md_range_hi) = u32::try_from(md_range.end)
{
// Single fragment with string that contains same bytes as doc.
return Some(Span::new(
fragment.span.lo() + rustc_span::BytePos(md_range_lo),
fragment.span.lo() + rustc_span::BytePos(md_range_hi),
fragment.span.ctxt(),
fragment.span.parent(),
return Some((
Span::new(
fragment.span.lo() + rustc_span::BytePos(md_range_lo),
fragment.span.lo() + rustc_span::BytePos(md_range_hi),
fragment.span.ctxt(),
fragment.span.parent(),
),
fragment.from_expansion,
));
}

@@ -593,19 +613,21 @@
{
match_data = Some((i, match_start));
} else {
// Heirustic produced ambiguity, return nothing.
// Heuristic produced ambiguity, return nothing.
return None;
}
}
}
if let Some((i, match_start)) = match_data {
let sp = fragments[i].span;
let fragment = &fragments[i];
let sp = fragment.span;
// we need to calculate the span start,
// then use that in our calulations for the span end
let lo = sp.lo() + BytePos(match_start as u32);
return Some(
return Some((
sp.with_lo(lo).with_hi(lo + BytePos((md_range.end - md_range.start) as u32)),
);
fragment.from_expansion,
));
}
return None;
}
@@ -659,8 +681,13 @@ pub fn source_span_for_markdown_range_inner(
}
}

Some(span_of_fragments(fragments)?.from_inner(InnerSpan::new(
let (span, _) = span_of_fragments_with_expansion(fragments)?;
Comment (Contributor):

We can revert `span_of_fragments_with_expansion` to just be `span_of_fragments` again now, right?

Any future potential users of it would probably be subject to the same edge case as is present here, and having a useless iteration isn't ideal.

let src_span = span.from_inner(InnerSpan::new(
md_range.start + start_bytes,
md_range.end + start_bytes + end_bytes,
)))
));
Some((
src_span,
fragments.iter().any(|frag| frag.span.overlaps(src_span) && frag.from_expansion),
))
}
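
Downstream, the lint passes in the rest of this diff consume the new `(Span, bool)` return value roughly as sketched below. This is illustrative only: the enclosing function is invented, imports are omitted, and `emit_lint_here` is a stand-in for the actual lint plumbing.

```rust
// Illustrative caller-side pattern (not a verbatim excerpt from the PR).
fn lint_markdown_range(
    tcx: TyCtxt<'_>,
    hir_id: HirId,
    doc: &str,
    md_range: Range<usize>,
    fragments: &[DocFragment],
) {
    // Map the markdown range back to a source span, if possible.
    let Some((span, from_expansion)) =
        source_span_for_markdown_range(tcx, doc, &md_range, fragments)
    else {
        return;
    };
    if from_expansion {
        // The user did not write this text; a suggestion here would not be actionable.
        return;
    }
    emit_lint_here(tcx, hir_id, span);
}
```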
6 changes: 4 additions & 2 deletions compiler/rustc_resolve/src/rustdoc/tests.rs
@@ -10,7 +10,7 @@ use super::{DocFragment, DocFragmentKind, source_span_for_markdown_range_inner};
fn single_backtick() {
let sm = SourceMap::new(FilePathMapping::empty());
sm.new_source_file(PathBuf::from("foo.rs").into(), r#"#[doc = "`"] fn foo() {}"#.to_string());
let span = source_span_for_markdown_range_inner(
let (span, _) = source_span_for_markdown_range_inner(
&sm,
"`",
&(0..1),
@@ -20,6 +20,7 @@ fn single_backtick() {
kind: DocFragmentKind::RawDoc,
doc: sym::empty, // unused placeholder
indent: 0,
from_expansion: false,
}],
)
.unwrap();
@@ -32,7 +33,7 @@
// regression test for https://github.com/rust-lang/rust/issues/141665
let sm = SourceMap::new(FilePathMapping::empty());
sm.new_source_file(PathBuf::from("foo.rs").into(), r#"#[doc = "⚠"] fn foo() {}"#.to_string());
let span = source_span_for_markdown_range_inner(
let (span, _) = source_span_for_markdown_range_inner(
&sm,
"⚠",
&(0..3),
@@ -42,6 +43,7 @@
kind: DocFragmentKind::RawDoc,
doc: sym::empty, // unused placeholder
indent: 0,
from_expansion: false,
}],
)
.unwrap();
1 change: 1 addition & 0 deletions src/librustdoc/clean/types/tests.rs
@@ -10,6 +10,7 @@ fn create_doc_fragment(s: &str) -> Vec<DocFragment> {
doc: Symbol::intern(s),
kind: DocFragmentKind::SugaredDoc,
indent: 0,
from_expansion: false,
}]
}

13 changes: 8 additions & 5 deletions src/librustdoc/passes/collect_intra_doc_links.rs
@@ -1387,13 +1387,15 @@ impl LinkCollector<'_, '_> {
ori_link: &MarkdownLinkRange,
item: &Item,
) {
let span = source_span_for_markdown_range(
let span = match source_span_for_markdown_range(
self.cx.tcx,
dox,
ori_link.inner_range(),
&item.attrs.doc_strings,
)
.unwrap_or_else(|| item.attr_span(self.cx.tcx));
) {
Some((sp, _)) => sp,
None => item.attr_span(self.cx.tcx),
};
rustc_session::parse::feature_err(
self.cx.tcx.sess,
sym::intra_doc_pointers,
@@ -1836,7 +1838,7 @@ fn report_diagnostic(
let mut md_range = md_range.clone();
let sp =
source_span_for_markdown_range(tcx, dox, &md_range, &item.attrs.doc_strings)
.map(|mut sp| {
.map(|(mut sp, _)| {
while dox.as_bytes().get(md_range.start) == Some(&b' ')
|| dox.as_bytes().get(md_range.start) == Some(&b'`')
{
@@ -1854,7 +1856,8 @@
(sp, MarkdownLinkRange::Destination(md_range))
}
MarkdownLinkRange::WholeLink(md_range) => (
source_span_for_markdown_range(tcx, dox, md_range, &item.attrs.doc_strings),
source_span_for_markdown_range(tcx, dox, md_range, &item.attrs.doc_strings)
.map(|(sp, _)| sp),
link_range.clone(),
),
};
3 changes: 2 additions & 1 deletion src/librustdoc/passes/lint/bare_urls.rs
@@ -18,7 +18,8 @@ use crate::html::markdown::main_body_opts;

pub(super) fn visit_item(cx: &DocContext<'_>, item: &Item, hir_id: HirId, dox: &str) {
let report_diag = |cx: &DocContext<'_>, msg: &'static str, range: Range<usize>| {
let maybe_sp = source_span_for_markdown_range(cx.tcx, dox, &range, &item.attrs.doc_strings);
let maybe_sp = source_span_for_markdown_range(cx.tcx, dox, &range, &item.attrs.doc_strings)
.map(|(sp, _)| sp);
let sp = maybe_sp.unwrap_or_else(|| item.attr_span(cx.tcx));
cx.tcx.node_span_lint(crate::lint::BARE_URLS, hir_id, sp, |lint| {
lint.primary_message(msg)
2 changes: 1 addition & 1 deletion src/librustdoc/passes/lint/check_code_block_syntax.rs
@@ -90,7 +90,7 @@ fn check_rust_syntax(
&code_block.range,
&item.attrs.doc_strings,
) {
Some(sp) => (sp, true),
Some((sp, _)) => (sp, true),
None => (item.attr_span(cx.tcx), false),
};

4 changes: 2 additions & 2 deletions src/librustdoc/passes/lint/html_tags.rs
@@ -16,7 +16,7 @@ pub(crate) fn visit_item(cx: &DocContext<'_>, item: &Item, hir_id: HirId, dox: &
let tcx = cx.tcx;
let report_diag = |msg: String, range: &Range<usize>, is_open_tag: bool| {
let sp = match source_span_for_markdown_range(tcx, dox, range, &item.attrs.doc_strings) {
Some(sp) => sp,
Some((sp, _)) => sp,
None => item.attr_span(tcx),
};
tcx.node_span_lint(crate::lint::INVALID_HTML_TAGS, hir_id, sp, |lint| {
@@ -55,7 +55,7 @@ pub(crate) fn visit_item(cx: &DocContext<'_>, item: &Item, hir_id: HirId, dox: &
&(generics_start..generics_end),
&item.attrs.doc_strings,
) {
Some(sp) => sp,
Some((sp, _)) => sp,
None => item.attr_span(tcx),
};
// Sometimes, we only extract part of a path. For example, consider this:
49 changes: 39 additions & 10 deletions src/librustdoc/passes/lint/redundant_explicit_links.rs
@@ -161,20 +161,35 @@ fn check_inline_or_reference_unknown_redundancy(

if dest_res == display_res {
let link_span =
source_span_for_markdown_range(cx.tcx, doc, &link_range, &item.attrs.doc_strings)
.unwrap_or(item.attr_span(cx.tcx));
let explicit_span = source_span_for_markdown_range(
match source_span_for_markdown_range(cx.tcx, doc, &link_range, &item.attrs.doc_strings)
{
Some((sp, from_expansion)) => {
if from_expansion {
return None;
}
sp
}
None => item.attr_span(cx.tcx),
};
let (explicit_span, from_expansion) = source_span_for_markdown_range(
Comment (Contributor):

This could also be `let else`.

cx.tcx,
doc,
&offset_explicit_range(doc, link_range, open, close),
&item.attrs.doc_strings,
)?;
let display_span = source_span_for_markdown_range(
if from_expansion {
return None;
}
let (display_span, false) = source_span_for_markdown_range(
cx.tcx,
doc,
resolvable_link_range,
&item.attrs.doc_strings,
)?;
)?
else {
// This `span` comes from macro expansion so skipping it.
return None;
};

cx.tcx.node_span_lint(crate::lint::REDUNDANT_EXPLICIT_LINKS, hir_id, explicit_span, |lint| {
lint.primary_message("redundant explicit link target")
@@ -206,21 +221,35 @@ fn check_reference_redundancy(

if dest_res == display_res {
let link_span =
source_span_for_markdown_range(cx.tcx, doc, &link_range, &item.attrs.doc_strings)
.unwrap_or(item.attr_span(cx.tcx));
let explicit_span = source_span_for_markdown_range(
match source_span_for_markdown_range(cx.tcx, doc, &link_range, &item.attrs.doc_strings)
{
Some((sp, from_expansion)) => {
if from_expansion {
return None;
}
sp
}
None => item.attr_span(cx.tcx),
};
let (explicit_span, from_expansion) = source_span_for_markdown_range(
cx.tcx,
doc,
&offset_explicit_range(doc, link_range.clone(), b'[', b']'),
&item.attrs.doc_strings,
)?;
let display_span = source_span_for_markdown_range(
if from_expansion {
return None;
}
let (display_span, from_expansion) = source_span_for_markdown_range(
Comment (Contributor):

This could also be `let else`.

cx.tcx,
doc,
resolvable_link_range,
&item.attrs.doc_strings,
)?;
let def_span = source_span_for_markdown_range(
if from_expansion {
return None;
}
let (def_span, _) = source_span_for_markdown_range(
cx.tcx,
doc,
&offset_reference_def_range(doc, dest, link_range),
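Regarding the two `let else` suggestions in the review comments above: the `?`-then-`if from_expansion { return None; }` pattern could indeed collapse into a single `let else` with a literal `false` in the pattern, along these lines (a sketch of the reviewer's suggestion, not code from the diff):

```rust
// Bail out on both `None` and spans that come from macro expansion.
let Some((explicit_span, false)) = source_span_for_markdown_range(
    cx.tcx,
    doc,
    &offset_explicit_range(doc, link_range, open, close),
    &item.attrs.doc_strings,
) else {
    return None;
};
```
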
10 changes: 6 additions & 4 deletions src/librustdoc/passes/lint/unescaped_backticks.rs
@@ -42,13 +42,15 @@ pub(crate) fn visit_item(cx: &DocContext<'_>, item: &Item, hir_id: HirId, dox: &

// If we can't get a span of the backtick, because it is in a `#[doc = ""]` attribute,
// use the span of the entire attribute as a fallback.
let span = source_span_for_markdown_range(
let span = match source_span_for_markdown_range(
tcx,
dox,
&(backtick_index..backtick_index + 1),
&item.attrs.doc_strings,
)
.unwrap_or_else(|| item.attr_span(tcx));
) {
Some((sp, _)) => sp,
None => item.attr_span(tcx),
};

tcx.node_span_lint(crate::lint::UNESCAPED_BACKTICKS, hir_id, span, |lint| {
lint.primary_message("unescaped backtick");
@@ -419,7 +421,7 @@ fn suggest_insertion(
/// Maximum bytes of context to show around the insertion.
const CONTEXT_MAX_LEN: usize = 80;

if let Some(span) = source_span_for_markdown_range(
if let Some((span, _)) = source_span_for_markdown_range(
cx.tcx,
dox,
&(insert_index..insert_index),
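A sketch of the kind of regression test that would exercise this change (the file path and directives are hypothetical; no test file is visible in the extract above):

```rust
// tests/rustdoc-ui/lints/redundant-explicit-links-from-expansion.rs (hypothetical path)
//@ check-pass
#![deny(rustdoc::redundant_explicit_links)]

macro_rules! gen_docs {
    ($name:ident) => {
        #[doc = concat!("Links to [", stringify!($name), "](", stringify!($name), ").")]
        pub struct $name;
    };
}

// The doc attribute comes from macro expansion, so the lint must not fire even
// though the link target is redundant with the display text.
gen_docs!(FromExpansion);
```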