
Conversation


@adityachoudhari26 adityachoudhari26 commented Oct 14, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Automatically recomputes policy release targets when resources or deployments are added/removed, ensuring new targets are correctly associated with existing policies.
    • Refreshes policy mappings before evaluating change sets to improve consistency during reconciliation.
  • Tests

    • Added end-to-end test verifying release targets created after policy creation are detected and correctly matched to their policies.


coderabbitai bot commented Oct 14, 2025

Walkthrough

Adds a policy recomputation step to the event handling flow: when there are added or removed changes, all policies recompute their release targets before the change set is evaluated. Implements Policies.RecomputeAll and adds an end-to-end test verifying release targets are created and matched to policies.

Changes

| Cohort / File(s) | Summary of changes |
| --- | --- |
| **Event handling flow**<br>`apps/workspace-engine/pkg/events/handler/handler.go` | In `ListenAndRoute`, when there are added/removed changes, call `ws.Policies().RecomputeAll(ctx)` before evaluating the change set. |
| **Policy store recomputation**<br>`apps/workspace-engine/pkg/workspace/store/policy.go` | Adds `Policies.RecomputeAll(ctx)`, which iterates policies, spawns goroutines to recompute release targets per policy using the existing recompute logic, and updates the `releaseTargets` map. |
| **E2E validation for release targets**<br>`apps/workspace-engine/test/e2e/engine_policy_test.go` | Adds `TestEngine_ReleaseTargetCreatedAfterPolicy`, which creates a policy and system state, asserts two release targets exist, and verifies each release target matches exactly one policy. |
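The recompute-before-evaluate ordering described above can be modeled in a few lines of standalone Go. Everything below is illustrative only: `ChangeSet`, `Policies`, `handle`, and `computeTargets` are stand-ins, not the actual workspace-engine API, which operates on materialized views rather than a plain map.

```go
package main

import "fmt"

// ChangeSet models a batch of added/removed entities (illustrative).
type ChangeSet struct {
	Added   []string
	Removed []string
}

// Policies holds a policy-ID -> release-target mapping that can go stale
// when resources or deployments change.
type Policies struct {
	releaseTargets map[string][]string
}

// RecomputeAll refreshes the mapping for every policy.
func (p *Policies) RecomputeAll() {
	for id := range p.releaseTargets {
		p.releaseTargets[id] = computeTargets(id)
	}
}

// computeTargets is a stand-in for selector-based target matching.
func computeTargets(policyID string) []string {
	return []string{policyID + ":target"}
}

// handle mirrors the ListenAndRoute change: if anything was added or
// removed, recompute policy mappings before evaluating the change set.
func handle(cs ChangeSet, p *Policies) string {
	if len(cs.Added) > 0 || len(cs.Removed) > 0 {
		p.RecomputeAll()
	}
	return evaluate(cs)
}

func evaluate(cs ChangeSet) string {
	return fmt.Sprintf("evaluated %d added, %d removed", len(cs.Added), len(cs.Removed))
}

func main() {
	p := &Policies{releaseTargets: map[string][]string{"policy-1": nil}}
	fmt.Println(handle(ChangeSet{Added: []string{"resource-1"}}, p))
	fmt.Println(p.releaseTargets["policy-1"][0]) // mapping is fresh after the event
}
```

The point of the ordering is that by the time the change set is evaluated, no policy's release-target mapping can still reflect the pre-change world.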

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Src as Change Source
  participant EH as Event Handler
  participant WS as Workspace (Policies)
  participant CR as Change Evaluator

  Src->>EH: Emit change set (added/removed)
  alt any added/removed
    EH->>WS: Policies().RecomputeAll(ctx)
    activate WS
    WS->>WS: Iterate policies\n(recompute release targets in parallel)
    deactivate WS
  end
  EH->>CR: Evaluate change set
  CR-->>EH: Evaluation result
  EH-->>Src: Dispatch/complete
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

I thump my paw—recompute, renew!
Targets sprout where carrots grew.
Events arrive, we hop in line,
Policies pair each plot just fine.
Two targets found, two happy hops.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title succinctly identifies the core change by indicating a fix for the stale policy release target materialized view, directly reflecting the primary update in the code without extraneous detail. It follows a clear, conventional commit style prefix and is immediately understandable to team members scanning the history. |
| Docstring Coverage | ✅ Passed | No functions found in the changes. Docstring coverage check skipped. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
apps/workspace-engine/test/e2e/engine_policy_test.go (1)

529-609: Consider extracting common logic to reduce test duplication.

The test logic is nearly identical to TestEngine_PolicyMultipleSelectors (lines 447-527), with only the order of entity creation differing (policy first vs. system first). While both tests serve distinct purposes—validating policy matching when release targets are created before and after policy creation—approximately 80 lines of setup and assertion code are duplicated.

Consider extracting the common test logic into a helper function:

```go
func testPolicyMultipleSelectorsWithOrder(t *testing.T, policyFirst bool) {
	d1ID := "deployment-prod"
	d2ID := "deployment-staging"
	e1ID := "env-1"
	r1ID := "resource-1"

	var opts []integration.WorkspaceOption

	policyOpt := integration.WithPolicy(
		integration.PolicyName("policy-prod-or-staging"),
		integration.WithPolicyTargetSelector(
			integration.PolicyTargetJsonDeploymentSelector(map[string]any{
				"type":     "name",
				"operator": "contains",
				"value":    "prod",
			}),
		),
		integration.WithPolicyTargetSelector(
			integration.PolicyTargetJsonDeploymentSelector(map[string]any{
				"type":     "name",
				"operator": "contains",
				"value":    "staging",
			}),
		),
	)

	systemOpt := integration.WithSystem(
		integration.WithDeployment(
			integration.DeploymentID(d1ID),
			integration.DeploymentName("deployment-prod"),
		),
		integration.WithDeployment(
			integration.DeploymentID(d2ID),
			integration.DeploymentName("deployment-staging"),
		),
		integration.WithEnvironment(
			integration.EnvironmentID(e1ID),
			integration.EnvironmentJsonResourceSelector(map[string]any{
				"type":     "name",
				"operator": "starts-with",
				"value":    "",
			}),
		),
	)

	resourceOpt := integration.WithResource(
		integration.ResourceID(r1ID),
	)

	if policyFirst {
		opts = []integration.WorkspaceOption{policyOpt, systemOpt, resourceOpt}
	} else {
		opts = []integration.WorkspaceOption{systemOpt, resourceOpt, policyOpt}
	}

	engine := integration.NewTestWorkspace(t, opts...)

	// Common assertions
	ctx := context.Background()
	releaseTargets, err := engine.Workspace().ReleaseTargets().Items(ctx)
	if err != nil {
		t.Fatalf("failed to get release targets: %v", err)
	}
	if len(releaseTargets) != 2 {
		t.Fatalf("expected 2 release targets, got %d", len(releaseTargets))
	}

	rtProd := &oapi.ReleaseTarget{
		DeploymentId:  d1ID,
		EnvironmentId: e1ID,
		ResourceId:    r1ID,
	}
	rtStaging := &oapi.ReleaseTarget{
		DeploymentId:  d2ID,
		EnvironmentId: e1ID,
		ResourceId:    r1ID,
	}

	policiesProd := engine.Workspace().Policies().GetPoliciesForReleaseTarget(ctx, rtProd)
	if len(policiesProd) != 1 {
		t.Fatalf("expected policy to match prod release target, got %d policies", len(policiesProd))
	}

	policiesStaging := engine.Workspace().Policies().GetPoliciesForReleaseTarget(ctx, rtStaging)
	if len(policiesStaging) != 1 {
		t.Fatalf("expected policy to match staging release target, got %d policies", len(policiesStaging))
	}
}

func TestEngine_PolicyMultipleSelectors(t *testing.T) {
	testPolicyMultipleSelectorsWithOrder(t, false)
}

func TestEngine_ReleaseTargetCreatedAfterPolicy(t *testing.T) {
	testPolicyMultipleSelectorsWithOrder(t, true)
}
```
apps/workspace-engine/pkg/events/handler/handler.go (1)

179-181: Consider performance implications of full policy recomputation.

The logic correctly ensures policy release-target mappings stay fresh by recomputing all policies whenever release targets change. However, this triggers a full recomputation of all policies on every release target change, which could become expensive as the number of policies grows.

For high-scale deployments with many policies, consider:

  1. Selective recomputation: Track which policies might be affected by the changed release targets (based on their selectors) and recompute only those policies.
  2. Batching: If multiple release target changes occur in quick succession, batch the recomputations.
  3. Monitoring: Add metrics to track recomputation time and frequency to identify when optimization becomes necessary.

The current approach is correct and simpler to maintain, so optimization may not be needed unless performance issues arise in production.
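The first suggestion (selective recomputation) could be sketched roughly as follows. `Policy`, `Selector`, and `affectedPolicies` are hypothetical simplifications: real policy selectors in this codebase are JSON conditions, not substrings, so this only illustrates the filtering idea.

```go
package main

import (
	"fmt"
	"strings"
)

// Policy pairs an ID with a drastically simplified selector: a substring
// the policy matches against deployment names.
type Policy struct {
	ID       string
	Selector string
}

// affectedPolicies returns only the policies whose selector could match one
// of the changed entity names, so a caller can recompute just those instead
// of every policy in the workspace.
func affectedPolicies(policies []Policy, changed []string) []Policy {
	var out []Policy
	for _, p := range policies {
		for _, name := range changed {
			if strings.Contains(name, p.Selector) {
				out = append(out, p)
				break
			}
		}
	}
	return out
}

func main() {
	policies := []Policy{
		{ID: "p-prod", Selector: "prod"},
		{ID: "p-staging", Selector: "staging"},
	}
	// Only the prod deployment changed, so only p-prod needs a recompute.
	for _, p := range affectedPolicies(policies, []string{"deployment-prod"}) {
		fmt.Println("recompute", p.ID)
	}
}
```

The trade-off is the usual one: the filter must be conservative (never skip a policy that could match), which is harder to guarantee with rich selector types than with the full recompute.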

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 700b63b and 10e3038.

📒 Files selected for processing (3)
  • apps/workspace-engine/pkg/events/handler/handler.go (1 hunks)
  • apps/workspace-engine/pkg/workspace/store/policy.go (1 hunks)
  • apps/workspace-engine/test/e2e/engine_policy_test.go (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
apps/workspace-engine/**/*.go

📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)

apps/workspace-engine/**/*.go: Do not add extraneous inline comments that state the obvious
Do not add comments that simply restate what the code does
Do not add comments for standard Go patterns (e.g., noting WaitGroup or semaphore usage)
Write comments that explain why, document complex logic/algorithms, provide non-obvious context, include TODO/FIXME, and document exported functions/types/methods

Files:

  • apps/workspace-engine/test/e2e/engine_policy_test.go
  • apps/workspace-engine/pkg/workspace/store/policy.go
  • apps/workspace-engine/pkg/events/handler/handler.go
apps/workspace-engine/**/*_test.go

📄 CodeRabbit inference engine (apps/workspace-engine/CLAUDE.md)

Follow the existing test structure used in *_test.go files

Files:

  • apps/workspace-engine/test/e2e/engine_policy_test.go
🧬 Code graph analysis (3)
apps/workspace-engine/test/e2e/engine_policy_test.go (6)
apps/workspace-engine/pkg/workspace/workspace.go (2)
  • NewTestWorkspace (27-38)
  • Workspace (40-46)
apps/workspace-engine/test/integration/workspace.go (1)
  • NewTestWorkspace (25-49)
apps/workspace-engine/test/integration/opts.go (13)
  • WithPolicy (152-169)
  • PolicyName (539-543)
  • WithPolicyTargetSelector (557-567)
  • PolicyTargetJsonDeploymentSelector (577-583)
  • WithSystem (84-105)
  • WithDeployment (259-277)
  • DeploymentID (310-314)
  • DeploymentName (297-302)
  • WithEnvironment (279-293)
  • EnvironmentID (427-431)
  • EnvironmentJsonResourceSelector (433-439)
  • WithResource (107-132)
  • ResourceID (455-459)
apps/workspace-engine/pkg/workspace/store/release_targets.go (1)
  • ReleaseTargets (22-26)
apps/workspace-engine/pkg/oapi/oapi.gen.go (1)
  • ReleaseTarget (278-282)
apps/workspace-engine/pkg/workspace/store/policy.go (1)
  • Policies (22-26)
apps/workspace-engine/pkg/workspace/store/policy.go (3)
apps/workspace-engine/pkg/workspace/store/repository/repo.go (1)
  • New (60-78)
apps/workspace-engine/pkg/workspace/store/materialized/view.go (1)
  • New (34-48)
apps/workspace-engine/pkg/workspace/store/store.go (1)
  • New (12-36)
apps/workspace-engine/pkg/events/handler/handler.go (2)
apps/workspace-engine/pkg/workspace/releasemanager/manager.go (1)
  • Changes (48-52)
apps/workspace-engine/pkg/workspace/store/policy.go (1)
  • Policies (22-26)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
apps/workspace-engine/pkg/workspace/store/policy.go (1)

162-167: LGTM! Public method enables full refresh of policy mappings.

The method correctly iterates over all policies and recreates their materialized views, enabling a complete refresh of release-target mappings when the underlying data changes. The implementation follows the same pattern as Upsert (line 126).

Note: The ctx parameter is currently unused in the implementation. While this is acceptable for interface consistency and future-proofing, be aware that the actual recomputation (triggered lazily by the materialized view) won't respect any context deadlines or cancellation.
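The lazy-computation behavior noted here can be illustrated with a minimal stand-in for a materialized view. This is not the actual `materialized` package API, just a sketch of the pattern: the constructor is cheap, and the expensive compute runs once on first access, outside any caller's context.

```go
package main

import (
	"fmt"
	"sync"
)

// View lazily computes its value once, on first access.
type View struct {
	once    sync.Once
	compute func() []string
	value   []string
}

func NewView(compute func() []string) *View {
	return &View{compute: compute}
}

// Get triggers the computation on the first call; later calls reuse the
// cached result. Nothing here observes a context, which is why a caller's
// deadline or cancellation cannot interrupt the recomputation.
func (v *View) Get() []string {
	v.once.Do(func() { v.value = v.compute() })
	return v.value
}

func main() {
	calls := 0
	view := NewView(func() []string {
		calls++
		return []string{"target-a", "target-b"}
	})
	fmt.Println(view.Get(), view.Get(), calls) // compute ran exactly once
}
```

Under this model, `RecomputeAll` swapping in a fresh view per policy is cheap at call time; the real cost is paid lazily by whoever reads the view next.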


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 10e3038 and 0eac5fc.

📒 Files selected for processing (1)
  • apps/workspace-engine/pkg/workspace/store/policy.go (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
apps/workspace-engine/pkg/workspace/store/policy.go (4)
apps/workspace-engine/pkg/workspace/store/repository/repo.go (1)
  • New (60-78)
apps/workspace-engine/pkg/workspace/store/materialized/view.go (1)
  • New (34-48)
apps/workspace-engine/pkg/workspace/store/store.go (1)
  • New (12-36)
apps/workspace-engine/pkg/cmap/concurrent_map.go (1)
  • New (46-48)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
🔇 Additional comments (1)
apps/workspace-engine/pkg/workspace/store/policy.go (1)

6-6: LGTM: Import addition is appropriate.

The sync package is correctly imported to support the concurrent recomputation logic in RecomputeAll.

Comment on lines +163 to +178
```go
func (p *Policies) RecomputeAll(ctx context.Context) {
	var wg sync.WaitGroup

	for policyItem := range p.IterBuffered() {
		wg.Add(1)
		policyId := policyItem.Val.Id

		go func() {
			defer wg.Done()
			// Create new materialized view which will start computing in background
			p.releaseTargets.Set(policyId, materialized.New(p.recomputeReleaseTargets(policyId)))
		}()
	}

	wg.Wait()
}
```

⚠️ Potential issue

Goroutine capture is safe here, but `ctx` and errors are ignored.

The closure capture is not actually a bug as written: `policyId` is declared inside the loop body with `:=`, so each iteration creates a fresh variable and every goroutine captures its own copy. (The classic capture pitfall applies to the range variable itself, `policyItem`, and only on toolchains before Go 1.22.) Passing the ID as a goroutine parameter still makes the intent explicit and guards against future refactors that capture the range variable directly.

Issues worth addressing:

  • The ctx parameter is not used, so cancellation signals won't be respected
  • No error handling—failures are silently ignored
  • The inline comment restates the obvious (violates coding guidelines)

Apply this diff to make the capture explicit, use the context, and surface errors:

```diff
-func (p *Policies) RecomputeAll(ctx context.Context) {
+func (p *Policies) RecomputeAll(ctx context.Context) error {
 	var wg sync.WaitGroup
+	errChan := make(chan error, 1)
 
 	for policyItem := range p.IterBuffered() {
+		select {
+		case <-ctx.Done():
+			wg.Wait()
+			return ctx.Err()
+		default:
+		}
+
 		wg.Add(1)
-		policyId := policyItem.Val.Id
+		policyId := policyItem.Val.Id // Capture for closure
 
-		go func() {
+		go func(id string) {
 			defer wg.Done()
-			// Create new materialized view which will start computing in background
-			p.releaseTargets.Set(policyId, materialized.New(p.recomputeReleaseTargets(policyId)))
-		}()
+			mv := materialized.New(p.recomputeReleaseTargets(id))
+			p.releaseTargets.Set(id, mv)
+		}(policyId)
 	}
 
 	wg.Wait()
+	select {
+	case err := <-errChan:
+		return err
+	default:
+		return nil
+	}
 }
```

Note: The above diff adds basic error handling infrastructure. However, materialized.New doesn't currently return errors. If error reporting is required, you may need to modify the materialized view package or add a validation step after recomputation completes.
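For reference, Go's capture semantics can be checked with a standalone program (independent of this codebase): a variable declared inside the loop body, like `policyId` above, is a fresh variable on each iteration, so every goroutine captures its own copy.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// collect launches one goroutine per ID. Because id is declared inside the
// loop body, each closure captures its own per-iteration variable; no ID is
// lost or duplicated regardless of goroutine scheduling.
func collect(ids []string) []string {
	var (
		mu  sync.Mutex
		got []string
		wg  sync.WaitGroup
	)
	for _, item := range ids {
		wg.Add(1)
		id := item // fresh variable each iteration, safe to capture
		go func() {
			defer wg.Done()
			mu.Lock()
			got = append(got, id)
			mu.Unlock()
		}()
	}
	wg.Wait()
	sort.Strings(got)
	return got
}

func main() {
	fmt.Println(collect([]string{"a", "b", "c"})) // [a b c]
}
```

Capturing the range variable `item` directly instead of `id` would only misbehave on toolchains before Go 1.22, where range variables were reused across iterations.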

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```go
func (p *Policies) RecomputeAll(ctx context.Context) error {
	var wg sync.WaitGroup
	errChan := make(chan error, 1)

	for policyItem := range p.IterBuffered() {
		select {
		case <-ctx.Done():
			wg.Wait()
			return ctx.Err()
		default:
		}

		wg.Add(1)
		policyId := policyItem.Val.Id // Capture for closure

		go func(id string) {
			defer wg.Done()
			mv := materialized.New(p.recomputeReleaseTargets(id))
			p.releaseTargets.Set(id, mv)
		}(policyId)
	}

	wg.Wait()
	select {
	case err := <-errChan:
		return err
	default:
		return nil
	}
}
```
🤖 Prompt for AI Agents
In apps/workspace-engine/pkg/workspace/store/policy.go around lines 163-178,
make the goroutine capture explicit by passing the per-iteration policy ID as a
parameter to the anonymous func, use the provided ctx by passing it into the
recomputation or checking ctx.Done() to stop work on cancellation, add minimal
error handling (log errors or record them in a slice/metric and avoid silent
failures), and remove the redundant inline comment; ensure materialized.New is
created with the per-iteration policy id and any validation or error reporting
is performed after recomputation completes (or adapt materialized.New to return
errors if needed).

