Optimize neotest in coverage mode #4101
base: master
Conversation
Codecov Report

❌ Patch coverage is

@@ Coverage Diff @@
##           master    #4101      +/- ##
==========================================
- Coverage   83.51%   83.49%   -0.03%
==========================================
  Files         351      351
  Lines       42390    42439      +49
==========================================
+ Hits        35401    35433      +32
- Misses       5251     5274      +23
+ Partials     1738     1732       -6
280a286 to adf6ec9 (force-push)
rawCoverage[h] = fullCov
}

fullCov.offsetsVisited = append(fullCov.offsetsVisited, cov.offsetsVisited...)
This can break with t.Parallel(), but maybe coverageLock is exactly what we need for the loop.
t.Parallel() will break (*Executor).EnableCoverage and (*Executor).DisableCoverage as well, since e.collectCoverage is not atomic and is not supposed to be shared between multiple parallel tests running against a single instance of Executor:
Lines 454 to 462 in 93fe450:

// EnableCoverage enables coverage collection for this executor, but only when `go test` is running with coverage enabled.
func (e *Executor) EnableCoverage(t testing.TB) {
	e.collectCoverage = isCoverageEnabled(t)
}

// DisableCoverage disables coverage collection for this executor until enabled explicitly through EnableCoverage.
func (e *Executor) DisableCoverage() {
	e.collectCoverage = false
}
So right now it looks like we don't fully support parallel coverage collection. Do we need an issue for that?
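To illustrate the concern, here is a minimal standalone sketch (not neo-go code; `executor`, `EnableCoverage` and `DisableCoverage` are simplified stand-ins) of what making the flag an `atomic.Bool` would look like. Note this only removes the data race on the flag itself; the semantic question of sharing one Executor's coverage mode between parallel tests remains:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// executor is a simplified stand-in for neotest's Executor: a plain bool
// flag would race when parallel tests flip it, an atomic.Bool would not.
type executor struct {
	collectCoverage atomic.Bool
}

func (e *executor) EnableCoverage()  { e.collectCoverage.Store(true) }
func (e *executor) DisableCoverage() { e.collectCoverage.Store(false) }

func main() {
	e := &executor{}
	var wg sync.WaitGroup
	// Simulate parallel tests toggling the flag concurrently; with
	// atomic.Bool this is race-free (verifiable with `go test -race`).
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(enable bool) {
			defer wg.Done()
			if enable {
				e.EnableCoverage()
			} else {
				e.DisableCoverage()
			}
		}(i%2 == 0)
	}
	wg.Wait()
	fmt.Println("concurrent Enable/Disable finished")
}
```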
Yeah, would be nice to have one since it's not obvious. Should be mentioned in neotest docs then.
coverageLock.Lock()
defer coverageLock.Unlock()

e.coverageLock.RLock()
While this one probably doesn't require any locking since it's executed as the test cleanup function; the Executor should be done by that time.
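The cleanup-ordering argument above can be sketched with a tiny fake of the testing package's behavior (`fakeT` is hypothetical illustrative code, not the real `testing.T`): cleanup functions registered via Cleanup run only after the test body has finished, so nothing touches the Executor concurrently with the report:

```go
package main

import "fmt"

// fakeT models the relevant part of testing.T: Cleanup registers functions
// that run LIFO only after the test body returns.
type fakeT struct{ cleanups []func() }

func (t *fakeT) Cleanup(f func()) { t.cleanups = append(t.cleanups, f) }

// run executes the test body, then the registered cleanups in reverse order,
// mirroring how `go test` sequences t.Cleanup callbacks.
func (t *fakeT) run(body func(*fakeT)) {
	body(t)
	for i := len(t.cleanups) - 1; i >= 0; i-- {
		t.cleanups[i]()
	}
}

func main() {
	var trace []string
	t := &fakeT{}
	t.run(func(t *fakeT) {
		t.Cleanup(func() { trace = append(trace, "report coverage") })
		trace = append(trace, "test body (executor in use)")
	})
	// The report runs strictly after the body, hence no locking is needed.
	fmt.Println(trace)
}
```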
pkg/neotest/basic.go (outdated)

collectCoverage bool
t testing.TB
reportCoverageScheduled atomic.Bool
s/reportCoverageScheduled/coverageReportScheduled

Or, which looks better to me, just use a sync.Once, ref. #4101 (comment)
It was named so to reference the particular reportCoverage func. Don't mind using once.
pkg/neotest/basic.go (outdated)

// collectCoverage is true if coverage is being collected when running this executor.
collectCoverage bool
t testing.TB
Don't bind t to the Executor, it goes against its design. Use a callback instead:
@@ -34,12 +33,18 @@ type Executor struct {
Validator Signer
Committee Signer
CommitteeHash util.Uint160
+
// collectCoverage is true if coverage is being collected when running this executor.
- collectCoverage bool
- t testing.TB
- reportCoverageScheduled atomic.Bool
- coverageLock sync.RWMutex
- rawCoverage map[util.Uint160]*scriptRawCoverage
+ // It may be turned on and off within the lifetime of the Executor.
+ collectCoverage bool
+ // scheduleCoverageReport is a cleanup function that schedules the report of
+ // collected coverage data for this Executor. It's supposed to be called
+ // once for this instance of Executor.
+ scheduleCoverageReport func()
+ once sync.Once
+
+ coverageLock sync.RWMutex
+ rawCoverage map[util.Uint160]*scriptRawCoverage
}

...and then set it if required:
@@ -49,15 +54,20 @@ func NewExecutor(t testing.TB, bc *core.Blockchain, validator, committee Signer)
checkMultiSigner(t, validator)
checkMultiSigner(t, committee)
- return &Executor{
+ e := &Executor{
Chain: bc,
Validator: validator,
Committee: committee,
CommitteeHash: committee.ScriptHash(),
collectCoverage: isCoverageEnabled(t),
- t: t,
rawCoverage: make(map[util.Uint160]*scriptRawCoverage),
}
+
+ if e.collectCoverage {
+ e.scheduleCoverageReport = func() { t.Cleanup(e.reportCoverage) }
+ }
+
+ return e
}

...and then call it once per Executor:
@@ -186,11 +196,7 @@ func (e *Executor) DeployContractCheckFAULT(t testing.TB, c *Contract, data any,
func (e *Executor) trackCoverage(c *Contract) {
if e.collectCoverage {
addScriptToCoverage(c)
- if !e.reportCoverageScheduled.Swap(true) {
- e.t.Cleanup(func() {
- e.reportCoverage()
- })
- }
+ e.once.Do(e.scheduleCoverageReport)
}
}
A closure accessing t is a binding too, except it's not explicit. OK, as you wish.
shouldn't I also init scheduleCoverageReport in EnableCoverage?
> except it's not explicit

Yes, I meant an explicit binding. Executor's methods shouldn't have explicit access to a t bound to the executor.

> shouldn't I also init scheduleCoverageReport in EnableCoverage

Yes.
Previously, each `Executor` acquired a global lock on each VM instruction. This led to test slowdowns as the number of Executor instances increased. Also, filling out the coverage file was scheduled by every `DeployContractBy()` / `DeployContractCheckFAULT()` call, incl. sub-test calls.

This introduces two changes to the process:

1. Each `Executor` schedules the report once, on cleanup of the test this instance is created for.
2. Each `Executor` collects coverage data within itself, then merges the collected data into the global space on report.

For example, this reduced the duration of the current NeoFS contract tests from ~12m to ~7s.

Closes #3558.

Signed-off-by: Leonard Lyubich <[email protected]>
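The collect-locally-then-merge-on-report idea from the description can be sketched as follows (types and names like `scriptRawCoverage` and `mergeCoverage` are simplified stand-ins, not the actual neotest code): each executor fills its own map without touching a global lock, and the global map is updated under a single lock acquisition at report time:

```go
package main

import (
	"fmt"
	"sync"
)

// scriptRawCoverage is a simplified stand-in for neotest's per-script
// coverage record: just the list of visited instruction offsets.
type scriptRawCoverage struct {
	offsetsVisited []int
}

var (
	globalLock  sync.Mutex
	globalCover = map[string]*scriptRawCoverage{}
)

// mergeCoverage folds one executor's locally collected data into the global
// map under a single lock acquisition, instead of locking per instruction.
func mergeCoverage(local map[string]*scriptRawCoverage) {
	globalLock.Lock()
	defer globalLock.Unlock()
	for h, cov := range local {
		fullCov, ok := globalCover[h]
		if !ok {
			fullCov = &scriptRawCoverage{}
			globalCover[h] = fullCov
		}
		fullCov.offsetsVisited = append(fullCov.offsetsVisited, cov.offsetsVisited...)
	}
}

func main() {
	// Two executors report their local data for the same contract hash.
	mergeCoverage(map[string]*scriptRawCoverage{"c1": {offsetsVisited: []int{1, 2}}})
	mergeCoverage(map[string]*scriptRawCoverage{"c1": {offsetsVisited: []int{3}}})
	fmt.Println(globalCover["c1"].offsetsVisited) // [1 2 3]
}
```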
adf6ec9 to 931a4eb (force-push)