
Commit 465c89f

lewiszlw and alamb authored

Update github repo links (apache#10167)

* Update github repo link
* Format markdown

Co-authored-by: Andrew Lamb <[email protected]>

1 parent 0b5bfe2 · commit 465c89f

File tree: 166 files changed, +6317 −6317 lines
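The sweep itself is mechanical: every occurrence of the old `apache/arrow-datafusion` URL is rewritten to `apache/datafusion`. As a hedged illustration only (the commit does not record the command actually used), a rename of this shape can be done with `grep` and GNU `sed`:

```shell
# Illustration: rewrite old repo URLs across a tree (demo file created here;
# the real change would run over the repository checkout instead).
tmp=$(mktemp -d)
printf 'repository = "https://github.com/apache/arrow-datafusion"\n' > "$tmp/Cargo.toml"
# Find files containing the old URL and rewrite it in place (GNU sed shown).
grep -rl 'apache/arrow-datafusion' "$tmp" \
  | xargs sed -i 's|apache/arrow-datafusion|apache/datafusion|g'
cat "$tmp/Cargo.toml"
```

Because the old name is a strict prefix-free substring of no other link in the tree, a plain substring replacement is sufficient; that is consistent with the purely symmetric +6317/−6317 line counts above.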

.github/actions/setup-windows-builder/action.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -38,7 +38,7 @@ runs:
     - name: Setup Rust toolchain
       shell: bash
       run: |
-        # Avoid self update to avoid CI failures: https://github.com/apache/arrow-datafusion/issues/9653
+        # Avoid self update to avoid CI failures: https://github.com/apache/datafusion/issues/9653
         rustup toolchain install stable --no-self-update
         rustup default stable
         rustup component add rustfmt
```

.github/workflows/dev_pr.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -34,7 +34,7 @@ jobs:
     runs-on: ubuntu-latest
     # only run for users whose permissions allow them to update PRs
     # otherwise labeler is failing:
-    # https://github.com/apache/arrow-datafusion/issues/3743
+    # https://github.com/apache/datafusion/issues/3743
    permissions:
      contents: read
      pull-requests: write
```

.github/workflows/rust.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -425,7 +425,7 @@ jobs:
         ci/scripts/rust_fmt.sh

   # Coverage job disabled due to
-  # https://github.com/apache/arrow-datafusion/issues/3678
+  # https://github.com/apache/datafusion/issues/3678

   # coverage:
   #   name: coverage
```

Cargo.toml

Lines changed: 2 additions & 2 deletions

```diff
@@ -46,10 +46,10 @@ resolver = "2"
 [workspace.package]
 authors = ["Apache Arrow <[email protected]>"]
 edition = "2021"
-homepage = "https://github.com/apache/arrow-datafusion"
+homepage = "https://github.com/apache/datafusion"
 license = "Apache-2.0"
 readme = "README.md"
-repository = "https://github.com/apache/arrow-datafusion"
+repository = "https://github.com/apache/datafusion"
 rust-version = "1.73"
 version = "37.1.0"
```

README.md

Lines changed: 8 additions & 8 deletions

```diff
@@ -27,22 +27,22 @@
 [crates-badge]: https://img.shields.io/crates/v/datafusion.svg
 [crates-url]: https://crates.io/crates/datafusion
 [license-badge]: https://img.shields.io/badge/license-Apache%20v2-blue.svg
-[license-url]: https://github.com/apache/arrow-datafusion/blob/main/LICENSE.txt
-[actions-badge]: https://github.com/apache/arrow-datafusion/actions/workflows/rust.yml/badge.svg
-[actions-url]: https://github.com/apache/arrow-datafusion/actions?query=branch%3Amain
+[license-url]: https://github.com/apache/datafusion/blob/main/LICENSE.txt
+[actions-badge]: https://github.com/apache/datafusion/actions/workflows/rust.yml/badge.svg
+[actions-url]: https://github.com/apache/datafusion/actions?query=branch%3Amain
 [discord-badge]: https://img.shields.io/discord/885562378132000778.svg?logo=discord&style=flat-square
 [discord-url]: https://discord.com/invite/Qw5gKqHxUM

-[Website](https://github.com/apache/arrow-datafusion) |
-[Guides](https://github.com/apache/arrow-datafusion/tree/main/docs) |
+[Website](https://github.com/apache/datafusion) |
+[Guides](https://github.com/apache/datafusion/tree/main/docs) |
 [API Docs](https://docs.rs/datafusion/latest/datafusion/) |
 [Chat](https://discord.com/channels/885562378132000778/885562378132000781)

 <img src="./docs/source/_static/images/2x_bgwhite_original.png" width="512" alt="logo"/>

 Apache DataFusion is a very fast, extensible query engine for building high-quality data-centric systems in
 [Rust](http://rustlang.org), using the [Apache Arrow](https://arrow.apache.org)
-in-memory format. [Python Bindings](https://github.com/apache/arrow-datafusion-python) are also available. DataFusion offers SQL and Dataframe APIs, excellent [performance](https://benchmark.clickhouse.com/), built-in support for CSV, Parquet, JSON, and Avro, extensive customization, and a great community.
+in-memory format. [Python Bindings](https://github.com/apache/datafusion-python) are also available. DataFusion offers SQL and Dataframe APIs, excellent [performance](https://benchmark.clickhouse.com/), built-in support for CSV, Parquet, JSON, and Avro, extensive customization, and a great community.

 Here are links to some important information
@@ -51,7 +51,7 @@ Here are links to some important information
 - [Rust Getting Started](https://arrow.apache.org/datafusion/user-guide/example-usage.html)
 - [Rust DataFrame API](https://arrow.apache.org/datafusion/user-guide/dataframe.html)
 - [Rust API docs](https://docs.rs/datafusion/latest/datafusion)
-- [Rust Examples](https://github.com/apache/arrow-datafusion/tree/master/datafusion-examples)
+- [Rust Examples](https://github.com/apache/datafusion/tree/master/datafusion-examples)
 - [Python DataFrame API](https://arrow.apache.org/datafusion-python/)
 - [Architecture](https://docs.rs/datafusion/latest/datafusion/index.html#architecture)
@@ -102,4 +102,4 @@ each stable Rust version for 6 months after it is
 [released](https://github.com/rust-lang/rust/blob/master/RELEASES.md). This
 generally translates to support for the most recent 3 to 4 stable Rust versions.

-We enforce this policy using a [MSRV CI Check](https://github.com/search?q=repo%3Aapache%2Farrow-datafusion+rust-version+language%3ATOML+path%3A%2F%5ECargo.toml%2F&type=code)
+We enforce this policy using a [MSRV CI Check](https://github.com/search?q=repo%3Aapache%2Fdatafusion+rust-version+language%3ATOML+path%3A%2F%5ECargo.toml%2F&type=code)
```

benchmarks/src/bin/tpch.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -47,7 +47,7 @@ enum TpchOpt {
 /// use `dbbench` instead.
 ///
 /// Note: this is kept to be backwards compatible with the benchmark names prior to
-/// <https://github.com/apache/arrow-datafusion/issues/6994>
+/// <https://github.com/apache/datafusion/issues/6994>
 #[tokio::main]
 async fn main() -> Result<()> {
     env_logger::init();
```

clippy.toml

Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
 disallowed-methods = [
-    { path = "tokio::task::spawn", reason = "To provide cancel-safety, use `SpawnedTask::spawn` instead (https://github.com/apache/arrow-datafusion/issues/6513)" },
-    { path = "tokio::task::spawn_blocking", reason = "To provide cancel-safety, use `SpawnedTask::spawn_blocking` instead (https://github.com/apache/arrow-datafusion/issues/6513)" },
+    { path = "tokio::task::spawn", reason = "To provide cancel-safety, use `SpawnedTask::spawn` instead (https://github.com/apache/datafusion/issues/6513)" },
+    { path = "tokio::task::spawn_blocking", reason = "To provide cancel-safety, use `SpawnedTask::spawn_blocking` instead (https://github.com/apache/datafusion/issues/6513)" },
 ]

 disallowed-types = [
```

datafusion-cli/Cargo.toml

Lines changed: 2 additions & 2 deletions

```diff
@@ -23,8 +23,8 @@ authors = ["Apache Arrow <[email protected]>"]
 edition = "2021"
 keywords = ["arrow", "datafusion", "query", "sql"]
 license = "Apache-2.0"
-homepage = "https://github.com/apache/arrow-datafusion"
-repository = "https://github.com/apache/arrow-datafusion"
+homepage = "https://github.com/apache/datafusion"
+repository = "https://github.com/apache/datafusion"
 # Specify MSRV here as `cargo msrv` doesn't support workspace version
 rust-version = "1.73"
 readme = "README.md"
```

datafusion-cli/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -43,4 +43,4 @@ checked in `Cargo.lock` file to ensure reproducible builds.
 However, the `datafusion` and sub crates are intended for use as libraries and
 thus do not have a `Cargo.lock` file checked in.

-[`datafusion cargo.toml`]: https://github.com/apache/arrow-datafusion/blob/main/Cargo.toml
+[`datafusion cargo.toml`]: https://github.com/apache/datafusion/blob/main/Cargo.toml
```

datafusion-examples/README.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -30,7 +30,7 @@ Run `git submodule update --init` to init test files.
 To run the examples, use the `cargo run` command, such as:

 ```bash
-git clone https://github.com/apache/arrow-datafusion
+git clone https://github.com/apache/datafusion
 cd arrow-datafusion
 # Download test data
 git submodule update --init
````
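For clones that predate the rename, GitHub redirects the old `apache/arrow-datafusion` URL, but a checkout can also be pointed at the new name explicitly. A small sketch in a throwaway repo (the `git remote add` line stands in for an existing clone whose origin still uses the old URL):

```shell
# Demo in a scratch repo: swap the remote URL after the
# apache/arrow-datafusion -> apache/datafusion rename.
# In a real clone, only the `git remote set-url` line is needed.
cd "$(mktemp -d)"
git init -q demo && cd demo
git remote add origin https://github.com/apache/arrow-datafusion.git
git remote set-url origin https://github.com/apache/datafusion.git
git remote get-url origin
```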

datafusion/core/benches/sql_planner.rs

Lines changed: 2 additions & 2 deletions

```diff
@@ -93,13 +93,13 @@ fn criterion_benchmark(c: &mut Criterion) {
     let ctx = create_context();

     // Test simplest
-    // https://github.com/apache/arrow-datafusion/issues/5157
+    // https://github.com/apache/datafusion/issues/5157
     c.bench_function("logical_select_one_from_700", |b| {
         b.iter(|| logical_plan(&ctx, "SELECT c1 FROM t700"))
     });

     // Test simplest
-    // https://github.com/apache/arrow-datafusion/issues/5157
+    // https://github.com/apache/datafusion/issues/5157
     c.bench_function("physical_select_one_from_700", |b| {
         b.iter(|| physical_plan(&ctx, "SELECT c1 FROM t700"))
     });
```

datafusion/core/src/catalog/mod.rs

Lines changed: 2 additions & 2 deletions

```diff
@@ -176,8 +176,8 @@ impl CatalogProviderList for MemoryCatalogProviderList {
 /// read from Delta Lake tables
 ///
 /// [`datafusion-cli`]: https://arrow.apache.org/datafusion/user-guide/cli.html
-/// [`DynamicFileCatalogProvider`]: https://github.com/apache/arrow-datafusion/blob/31b9b48b08592b7d293f46e75707aad7dadd7cbc/datafusion-cli/src/catalog.rs#L75
-/// [`catalog.rs`]: https://github.com/apache/arrow-datafusion/blob/main/datafusion-examples/examples/catalog.rs
+/// [`DynamicFileCatalogProvider`]: https://github.com/apache/datafusion/blob/31b9b48b08592b7d293f46e75707aad7dadd7cbc/datafusion-cli/src/catalog.rs#L75
+/// [`catalog.rs`]: https://github.com/apache/datafusion/blob/main/datafusion-examples/examples/catalog.rs
 /// [delta-rs]: https://github.com/delta-io/delta-rs
 /// [`UnityCatalogProvider`]: https://github.com/delta-io/delta-rs/blob/951436ecec476ce65b5ed3b58b50fb0846ca7b91/crates/deltalake-core/src/data_catalog/unity/datafusion.rs#L111-L123
 ///
```

datafusion/core/src/dataframe/mod.rs

Lines changed: 2 additions & 2 deletions

```diff
@@ -2423,7 +2423,7 @@ mod tests {
         Ok(())
     }

-    // Test issue: https://github.com/apache/arrow-datafusion/issues/7790
+    // Test issue: https://github.com/apache/datafusion/issues/7790
     // The join operation outputs two identical column names, but they belong to different relations.
     #[tokio::test]
     async fn with_column_join_same_columns() -> Result<()> {
@@ -2503,7 +2503,7 @@ mod tests {
     }

     // Table 't1' self join
-    // Supplementary test of issue: https://github.com/apache/arrow-datafusion/issues/7790
+    // Supplementary test of issue: https://github.com/apache/datafusion/issues/7790
     #[tokio::test]
     async fn with_column_self_join() -> Result<()> {
         let df = test_table().await?.select_columns(&["c1"])?;
```

datafusion/core/src/datasource/cte_worktable.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -38,7 +38,7 @@ use crate::execution::context::SessionState;
 /// See here for more details: www.postgresql.org/docs/11/queries-with.html#id-1.5.6.12.5.4
 pub struct CteWorkTable {
     /// The name of the CTE work table
-    // WIP, see https://github.com/apache/arrow-datafusion/issues/462
+    // WIP, see https://github.com/apache/datafusion/issues/462
     #[allow(dead_code)]
     name: String,
     /// This schema must be shared across both the static and recursive terms of a recursive query
```

datafusion/core/src/datasource/file_format/parquet.rs

Lines changed: 2 additions & 2 deletions

```diff
@@ -212,7 +212,7 @@ impl FileFormat for ParquetFormat {
         // object stores (like local file systems) the order returned from list
         // is not deterministic. Thus, to ensure deterministic schema inference
         // sort the files first.
-        // https://github.com/apache/arrow-datafusion/pull/6629
+        // https://github.com/apache/datafusion/pull/6629
         schemas.sort_by(|(location1, _), (location2, _)| location1.cmp(location2));

         let schemas = schemas
@@ -1040,7 +1040,7 @@ pub(crate) mod test_util {
         multi_page: bool,
     ) -> Result<(Vec<ObjectMeta>, Vec<NamedTempFile>)> {
         // we need the tmp files to be sorted as some tests rely on the how the returning files are ordered
-        // https://github.com/apache/arrow-datafusion/pull/6629
+        // https://github.com/apache/datafusion/pull/6629
         let tmp_files = {
             let mut tmp_files: Vec<_> = (0..batches.len())
                 .map(|_| NamedTempFile::new().expect("creating temp file"))
```

datafusion/core/src/datasource/file_format/write/demux.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -57,7 +57,7 @@ type DemuxedStreamReceiver = UnboundedReceiver<(Path, RecordBatchReceiver)>;
 /// the demux task for errors and abort accordingly. The single_file_ouput parameter
 /// overrides all other settings to force only a single file to be written.
 /// partition_by parameter will additionally split the input based on the unique
-/// values of a specific column `<https://github.com/apache/arrow-datafusion/issues/7744>``
+/// values of a specific column `<https://github.com/apache/datafusion/issues/7744>``
 ///                                                                              ┌───────────┐               ┌────────────┐    ┌─────────────┐
 ///                                                     ┌──────▶  │ batch 1 ├────▶...──────▶│   Batch a  │    │ Output File1│
 ///                                                     │         └───────────┘               └────────────┘    └─────────────┘
```

datafusion/core/src/datasource/listing/table.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -244,7 +244,7 @@ pub struct ListingOptions {
     /// the future be automatically determined, for example using
     /// parquet metadata.
     ///
-    /// See <https://github.com/apache/arrow-datafusion/issues/4177>
+    /// See <https://github.com/apache/datafusion/issues/4177>
     /// NOTE: This attribute stores all equivalent orderings (the outer `Vec`)
     /// where each ordering consists of an individual lexicographic
     /// ordering (encapsulated by a `Vec<Expr>`). If there aren't
```

datafusion/core/src/datasource/listing/url.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -457,7 +457,7 @@ mod tests {
         test("/a/b*.txt", Some(("/a/", "b*.txt")));
         test("/a/b/**/c*.txt", Some(("/a/b/", "**/c*.txt")));

-        // https://github.com/apache/arrow-datafusion/issues/2465
+        // https://github.com/apache/datafusion/issues/2465
         test(
             "/a/b/c//alltypes_plain*.parquet",
             Some(("/a/b/c//", "alltypes_plain*.parquet")),
```

datafusion/core/src/datasource/physical_plan/csv.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -769,7 +769,7 @@ mod tests {
         assert_eq!(14, csv.base_config.file_schema.fields().len());
         assert_eq!(14, csv.schema().fields().len());

-        // errors due to https://github.com/apache/arrow-datafusion/issues/4918
+        // errors due to https://github.com/apache/datafusion/issues/4918
         let mut it = csv.execute(0, task_ctx)?;
         let err = it.next().await.unwrap().unwrap_err().strip_backtrace();
         assert_eq!(
```

datafusion/core/src/datasource/physical_plan/parquet/row_groups.rs

Lines changed: 2 additions & 2 deletions

```diff
@@ -49,7 +49,7 @@ use super::ParquetFileMetrics;
 /// did not filter out that row group.
 ///
 /// Note: This method currently ignores ColumnOrder
-/// <https://github.com/apache/arrow-datafusion/issues/8335>
+/// <https://github.com/apache/datafusion/issues/8335>
 pub(crate) fn prune_row_groups_by_statistics(
     arrow_schema: &Schema,
     parquet_schema: &SchemaDescriptor,
@@ -63,7 +63,7 @@ pub(crate) fn prune_row_groups_by_statistics(
         if let Some(range) = &range {
             // figure out where the first dictionary page (or first data page are)
             // note don't use the location of metadata
-            // <https://github.com/apache/arrow-datafusion/issues/5995>
+            // <https://github.com/apache/datafusion/issues/5995>
             let col = metadata.column(0);
             let offset = col
                 .dictionary_page_offset()
```

datafusion/core/src/datasource/physical_plan/parquet/statistics.rs

Lines changed: 3 additions & 3 deletions

```diff
@@ -360,7 +360,7 @@ mod test {
     #[should_panic(
         expected = "Inconsistent types in ScalarValue::iter_to_array. Expected Int64, got TimestampNanosecond(NULL, None)"
     )]
-    // Due to https://github.com/apache/arrow-datafusion/issues/8295
+    // Due to https://github.com/apache/datafusion/issues/8295
    fn roundtrip_timestamp() {
        Test {
            input: timestamp_array([
@@ -470,7 +470,7 @@ mod test {
             (None, None),
         ]),
     };
-        // Due to https://github.com/apache/arrow-datafusion/issues/8334,
+        // Due to https://github.com/apache/datafusion/issues/8334,
         // statistics for struct arrays are not supported
         test.expected_min =
             new_null_array(test.input.data_type(), test.expected_min.len());
@@ -483,7 +483,7 @@ mod test {
     #[should_panic(
         expected = "Inconsistent types in ScalarValue::iter_to_array. Expected Utf8, got Binary(NULL)"
     )]
-    // Due to https://github.com/apache/arrow-datafusion/issues/8295
+    // Due to https://github.com/apache/datafusion/issues/8295
    fn roundtrip_binary() {
        Test {
            input: Arc::new(BinaryArray::from_opt_vec(vec![
```

datafusion/core/src/datasource/view.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -158,7 +158,7 @@ mod tests {

     #[tokio::test]
     async fn issue_3242() -> Result<()> {
-        // regression test for https://github.com/apache/arrow-datafusion/pull/3242
+        // regression test for https://github.com/apache/datafusion/pull/3242
         let session_ctx = SessionContext::new_with_config(
             SessionConfig::new().with_information_schema(true),
         );
```

datafusion/core/src/execution/context/avro.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -65,7 +65,7 @@ mod tests {
     use async_trait::async_trait;

     // Test for compilation error when calling read_* functions from an #[async_trait] function.
-    // See https://github.com/apache/arrow-datafusion/issues/1154
+    // See https://github.com/apache/datafusion/issues/1154
     #[async_trait]
     trait CallReadTrait {
         async fn call_read_avro(&self) -> DataFrame;
```

datafusion/core/src/execution/context/csv.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -127,7 +127,7 @@ mod tests {
     }

     // Test for compilation error when calling read_* functions from an #[async_trait] function.
-    // See https://github.com/apache/arrow-datafusion/issues/1154
+    // See https://github.com/apache/datafusion/issues/1154
     #[async_trait]
     trait CallReadTrait {
         async fn call_read_csv(&self) -> DataFrame;
```

datafusion/core/src/execution/context/parquet.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -333,7 +333,7 @@ mod tests {
     }

     // Test for compilation error when calling read_* functions from an #[async_trait] function.
-    // See https://github.com/apache/arrow-datafusion/issues/1154
+    // See https://github.com/apache/datafusion/issues/1154
     #[async_trait]
     trait CallReadTrait {
         async fn call_read_parquet(&self) -> DataFrame;
```

datafusion/core/src/lib.rs

Lines changed: 5 additions & 5 deletions

```diff
@@ -128,7 +128,7 @@
 //!
 //! There are many additional annotated examples of using DataFusion in the [datafusion-examples] directory.
 //!
-//! [datafusion-examples]: https://github.com/apache/arrow-datafusion/tree/main/datafusion-examples
+//! [datafusion-examples]: https://github.com/apache/datafusion/tree/main/datafusion-examples
 //!
 //! ## Customization and Extension
 //!
@@ -170,7 +170,7 @@
 //! You can find a formal description of DataFusion's architecture in our
 //! [SIGMOD 2024 Paper].
 //!
-//! [SIGMOD 2024 Paper]: https://github.com/apache/arrow-datafusion/files/14789704/DataFusion_Query_Engine___SIGMOD_2024-FINAL.pdf
+//! [SIGMOD 2024 Paper]: https://github.com/apache/datafusion/files/14789704/DataFusion_Query_Engine___SIGMOD_2024-FINAL.pdf
 //!
 //! ## Overview Presentations
 //!
@@ -306,7 +306,7 @@
 //! [`TreeNode`]: datafusion_common::tree_node::TreeNode
 //! [`tree_node module`]: datafusion_expr::logical_plan::tree_node
 //! [`ExprSimplifier`]: crate::optimizer::simplify_expressions::ExprSimplifier
-//! [`expr_api`.rs]: https://github.com/apache/arrow-datafusion/blob/main/datafusion-examples/examples/expr_api.rs
+//! [`expr_api`.rs]: https://github.com/apache/datafusion/blob/main/datafusion-examples/examples/expr_api.rs
 //!
 //! ### Physical Plans
 //!
@@ -379,7 +379,7 @@
 //! [`RepartitionExec`]: https://docs.rs/datafusion/latest/datafusion/physical_plan/repartition/struct.RepartitionExec.html
 //! [Volcano style]: https://w6113.github.io/files/papers/volcanoparallelism-89.pdf
 //! [Morsel-Driven Parallelism]: https://db.in.tum.de/~leis/papers/morsels.pdf
-//! [DataFusion paper submitted SIGMOD]: https://github.com/apache/arrow-datafusion/files/13874720/DataFusion_Query_Engine___SIGMOD_2024.pdf
+//! [DataFusion paper submitted SIGMOD]: https://github.com/apache/datafusion/files/13874720/DataFusion_Query_Engine___SIGMOD_2024.pdf
 //! [implementors of `ExecutionPlan`]: https://docs.rs/datafusion/latest/datafusion/physical_plan/trait.ExecutionPlan.html#implementors
 //!
 //! ## Thread Scheduling
@@ -488,7 +488,7 @@ pub use parquet;

 // re-export DataFusion sub-crates at the top level. Use `pub use *`
 // so that the contents of the subcrates appears in rustdocs
-// for details, see https://github.com/apache/arrow-datafusion/issues/6648
+// for details, see https://github.com/apache/datafusion/issues/6648

 /// re-export of [`datafusion_common`] crate
 pub mod common {
```

datafusion/core/src/physical_optimizer/coalesce_batches.rs

Lines changed: 1 addition & 1 deletion

```diff
@@ -59,7 +59,7 @@ impl PhysicalOptimizerRule for CoalesceBatches {
         // The goal here is to detect operators that could produce small batches and only
         // wrap those ones with a CoalesceBatchesExec operator. An alternate approach here
         // would be to build the coalescing logic directly into the operators
-        // See https://github.com/apache/arrow-datafusion/issues/139
+        // See https://github.com/apache/datafusion/issues/139
         let wrap_in_coalesce = plan_any.downcast_ref::<FilterExec>().is_some()
             || plan_any.downcast_ref::<HashJoinExec>().is_some()
             // Don't need to add CoalesceBatchesExec after a round robin RepartitionExec
```

0 commit comments