
Fix unclear test case failure messages with detailed expected vs actual output comparison #3


Draft · wants to merge 2 commits into master

Conversation


@Copilot Copilot AI commented Jul 3, 2025

Problem

When a test case failed, users only saw a generic message like "test case failed", with no indication of the expected output versus what their code actually produced. This made debugging extremely difficult, as users had no way to understand why their solution was incorrect.

Before this fix:

❌😭 Test Case 1: Failed! 😭 
Error: Test case 1 failed.
Actual Output: N/A

Solution

Enhanced the error handling to provide a detailed expected vs. actual output comparison when a test case fails.

After this fix:

❌😭 Test Case 1: Failed! 😭
❌ Test case 1 failed.
Expected Output: "[1,2,3]"
Actual Output: "[1,2,4]"

Changes Made

  1. Modified the compareOutputs() function in executeCode.ts (see the sketch after this list):

    • Rejects with a detailed comparison showing both the expected and actual outputs, instead of a simple "Test case X failed" message
    • Maintains the same console logging while improving the error message passed to the UI
  2. Enhanced error handling in extension.ts:

    • Detects when an error message contains the detailed comparison information
    • Displays the full expected vs. actual comparison to the user
    • Falls back to the previous behavior for other error types (compilation errors, runtime errors)
  3. Updated documentation:

    • Removed the known issue about unclear failure messages from the README
    • Added a feature entry highlighting the improved error messaging
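
A minimal sketch of the two code changes, assuming a promise-based flow. The names compareOutputs(), executeCode.ts, and extension.ts come from this PR; the exact signatures, the handleTestRun() wrapper, and the "Expected Output:" marker detection are illustrative assumptions, not the repository's actual code:

```typescript
import * as vscode from 'vscode';

// executeCode.ts (sketch): reject with the full comparison instead of a
// bare "Test case X failed" message, keeping the existing console logging.
function compareOutputs(
  expected: string,
  actual: string,
  testCase: number
): Promise<void> {
  return new Promise((resolve, reject) => {
    if (expected.trim() === actual.trim()) {
      resolve();
    } else {
      console.log(`❌ Test case ${testCase} failed.`); // same logging as before
      reject(
        new Error(
          `Test case ${testCase} failed.\n` +
            `Expected Output: "${expected}"\n` +
            `Actual Output: "${actual}"`
        )
      );
    }
  });
}

// extension.ts (sketch): surface the detailed message when present, and
// fall back to the previous generic handling for compilation/runtime errors.
async function handleTestRun(run: () => Promise<void>): Promise<void> {
  try {
    await run();
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    if (message.includes('Expected Output:')) {
      vscode.window.showErrorMessage(message); // full expected vs. actual comparison
    } else {
      vscode.window.showErrorMessage('Test case failed.'); // previous behavior
    }
  }
}
```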

Benefits

  • Better debugging experience: Users can immediately see what their code produced vs what was expected
  • Faster problem resolution: No need to manually check output files or guess what went wrong
  • Backward compatible: Maintains existing behavior for non-comparison errors
  • Clear visual distinction: Easy to spot differences between expected and actual results

Testing

Created comprehensive tests demonstrating that the improvement works correctly for various data types (arrays, strings, numbers) while maintaining compatibility with passing test cases; a sketch of one such test follows.
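
A minimal sketch of one such test, assuming a Jest-style runner (the PR does not name the test framework); the matchers and sample values are illustrative:

```typescript
// Verifies that a failing comparison rejects with both the expected and
// actual values, and that a matching comparison still resolves.
describe('compareOutputs', () => {
  it('includes expected and actual output for mismatched arrays', async () => {
    await expect(compareOutputs('[1,2,3]', '[1,2,4]', 1)).rejects.toThrow(
      /Expected Output: "\[1,2,3\]"[\s\S]*Actual Output: "\[1,2,4\]"/
    );
  });

  it('resolves when outputs match', async () => {
    await expect(compareOutputs('42', '42', 2)).resolves.toBeUndefined();
  });
});
```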

Fixes #2.

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • googlechromelabs.github.io
    • Triggering command: node install.mjs (dns block)
  • https://storage.googleapis.com/chrome-for-testing-public/131.0.6778.204/linux64/chrome-headless-shell-linux64.zip
    • Triggering command: node install.mjs (http block)

If you need me to access, download, or install something from one of these locations, the addresses above will need to be allowed through the firewall.



@Copilot Copilot AI changed the title [WIP] if the test case fails there is not clear message to the user, it only shows that the test case is failed Fix unclear test case failure messages with detailed expected vs actual output comparison Jul 3, 2025
@Copilot Copilot AI requested a review from ashishexee July 3, 2025 18:54
Copilot finished work on behalf of ashishexee July 3, 2025 18:54