
Parallelism Revisions #83


Merged
merged 19 commits into main on Apr 8, 2025

Conversation


@jespiron jespiron commented Apr 2, 2025

Recover from #43

This is a rewrite of the first half, with the goal of making the second half easier to understand.

Main changes:

  • Introduces a real-world example, circle rendering, to situate the concepts (inter-thread communication, data races)
  • Adds a motivating example for `thread::scope`

Smaller changes:

  • Explains atomics before the second half's `Arc`
  • Adds advice and considerations that I personally find useful
  • Adds speaker notes for the second half. The second half can benefit from revisions too, but we'll do that in a separate pass

@jespiron jespiron mentioned this pull request Apr 2, 2025
@jespiron jespiron marked this pull request as ready for review April 3, 2025 19:53
@jespiron jespiron requested a review from Copilot April 3, 2025 19:53

@Copilot Copilot AI left a comment


Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

@connortsui20 (Member)

> Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

I never enabled Copilot so I have no idea why this is here

@connortsui20 connortsui20 (Member) left a comment

I like this outline a lot! I think it makes the flow of the lecture quite nice. However, something Ben did well was explaining a lot of material that wasn't actually on the slides. Additionally, in past semesters we've had older students who were already familiar with this, so I think we can allocate a good amount of time to going through the why and what of parallelism before the how.

Comment on lines 92 to 102
* 8 cores, 8 threads
* 4 threads load the webpage, 4 threads update the progress bar

</div>
<div>

## Concurrency
## Alternate Concurrency Model

* 1 core, 1 thread
* When blocked on loading the webpage, update the progress bar

Member

I think these examples could be improved; for example, I'm sure there are other, more intuitive ways to split up 8 workers in general.

Also, remember that some people in the class have never even seen threads before, so try to stick to higher-level examples, like multiple people working on one thing vs. one person working on many things.

@jespiron jespiron (Member Author) Apr 5, 2025

Originally, I preserved the old lecture's example, but we can go higher-level because of the shifted demographics this semester.

The point of this slide is really just that parallelism => # of threads > 1

I've opted to go directly to the diagram slide, with a high-level explanation like "In parallelism, you have three people: one taking orders, one cooking, and one serving. In some concurrency models, you have one person alternating between these tasks..."


* `thread::spawn` takes a function as argument
* This is the function that the thread runs
* We can spawn multiple of these, to run multiple functions at once
Member

We can spawn multiple threads (but we cannot spawn multiple copies of the same closure, because these closures are `FnOnce`).

@jespiron jespiron (Member Author) Apr 5, 2025

Very valuable point; I would save that for the second half, when we talk about the Rust-specific implementation.
See "# Creating Threads, In More Detail": this slide is a copy from the old lecture, except that I hid the Rust-specific details (`FnOnce`, `Send`, the `JoinHandle` type).

The purpose of this slide is to give a gist of how threads are used, independent of language implementation.

Member Author

Ok, I added it to the second half.

* We can spawn multiple of these, to run multiple functions at once


---
Member

I think I would like to see an example of threads here before you talk about (potentially) complex communication protocols.

Could you copy the example from the book? https://rust-book.cs.brown.edu/ch16-01-threads.html#creating-a-new-thread-with-spawn

This is a super simple example, but again, remember that some people have never seen this before.

At the very least, I would like to see everything in the first section of that chapter near the beginning of this lecture, with the exception that you can somewhat handwave how `join` works.

Member Author

This is the same example as the book's, condensed to show only the `thread::spawn` call.

I see value in expanding it out like the book does, especially the `println!` outputs. Can definitely do that.

Member Author

Alright, I expanded out this first half's example and elaborated on `join` in the second half.

@jespiron jespiron requested a review from connortsui20 April 5, 2025 22:02
@jespiron (Member Author)

jespiron commented Apr 7, 2025

!remind me

discuss thread::scope

@jespiron (Member Author)

jespiron commented Apr 7, 2025

Okay, I added an example for `thread::scope`.
It should help them with the RowLab homework. It also better contextualizes `Arc` + `Mutex` by framing them under a shared problem.

@connortsui20 connortsui20 (Member) left a comment

Very nice job! I think the flow of the lecture is super clear and the specific concepts are great for this lecture.

I've ignored some nits and things that need to be cleaned up since I'll just do those on Wednesday before lecture. The biggest thing that can be improved is examples of actually using Rust code for parallelism.

Comment on lines +126 to +145
# Example: Creating a Thread

We can create ("spawn") more threads with `thread::spawn`:

```rust
let handle = thread::spawn(|| {
    for i in 1..10 {
        println!("working on item {} from the spawned thread!", i);
        thread::sleep(Duration::from_millis(1));
    }
});

for i in 1..5 {
    println!("working on item {i} from the main thread!");
    thread::sleep(Duration::from_millis(1));
}
```

* `thread::spawn` takes a closure as an argument
* This closure is the function that the thread runs
Member

It would be nice to see a few examples of this running without joining, because it shows the non-determinism of multithreading.

Member Author

I actually added it, then deleted it here: b5f50cb

It was out of timing concerns; could one of you or @Fiona-CMU add it back where you see fit?

Comment on lines +167 to +172

handle.join().unwrap();
```

* Blocks the current thread until the thread associated with `handle` finishes

Member

And by showing the example without joining, you can motivate why we have joining in the first place.

Member Author

Yes, it’s here: b5f50cb


* When a thread touches a pixel, increment the pixel's associated `x`
* Now each thread knows how many layers of paint on that pixel has
Member

Now each thread knows how many layers of paint there are on that pixel

First ingredient: `x` is in shared memory, and `x` must satisfy some property to be correct.

```c
// x is # of times *any* thread has called `update_x`
```
Member

Maybe include a forward declaration of update_x?

Also we could say:

// Invariant: `x` is the total number of times **any** thread has called `update_x`.

Comment on lines 302 to 311

# Shared Memory: Data Races

Suppose we have a shared variable `x`.

```c
static int x = 0;
```

* This is C pseudocode; we'll explain Rust's interface in second half
Member

I think this slide can be merged into the next one; the previous slides showed this code anyway.

Comment on lines +572 to +573
* `fetch_and_add`: performs the operation suggested by the name, and returns the value that was previously in memory
* Also `fetch_and_sub`, `fetch_and_or`, `fetch_and_and`, ...
Member

These should be Rust code slide examples

Comment on lines +579 to +585
Other common atomic is `compare_and_swap`
* If the current value matches some old value, then write new value into memory
* Depending on variant, returns a boolean for whether new value was written into memory
* "Lock-free" programming:
* No locks! Just `compare_and_swap` until we successfully write new value
* Not necessarily more performant than lock-based solutions
* Contention is bottleneck, not presence of locks
Member

I think that we should probably keep this simple and skip past this during lecture. Call this a "sneak peek of CAS" and leave this here for people who are interested to read later

Comment on lines +601 to +607
# Atomics

Rust provides atomic primitive types, like `AtomicBool`, `AtomicI8`, `AtomicIsize`, etc.
* Provides a way to access values atomically from any thread
* Safe to share between threads implementing `Sync`
* We won't cover it further in this course, but the API is largely 1:1 with the C++20 atomics
* If interested in pitfalls, read up on *memory ordering* in computer systems
Member

This should come directly after the proposed Rust Atomics slide example

Comment on lines 646 to 647
* Approach 2: Message Passing
* Eliminates shared memory
Member

Bold this

Comment on lines 935 to 937
* The Scope object `s` has a lifetime tied to the `thread::scope` call
* The closure *cannot* smuggle a reference to borrowed data outside this lifetime
* You *cannot* return thread handles (`t1`, `t2`) outside the scope
Member

This goes off the edge of the slide

@jespiron (Member Author)

jespiron commented Apr 8, 2025

Thank you, Fiona, for the additions! Merging this so we can pick it up in #85.

@jespiron jespiron merged commit bc753b4 into main Apr 8, 2025
@jespiron jespiron deleted the jess/parallelism-revisions branch April 8, 2025 12:26
@connortsui20 connortsui20 restored the jess/parallelism-revisions branch April 8, 2025 13:50
@connortsui20 connortsui20 deleted the jess/parallelism-revisions branch April 8, 2025 13:54
@connortsui20 connortsui20 mentioned this pull request Apr 8, 2025
3 participants