This document provides guidance on how to manually test the LeetCode Helper extension during development. 🚀
Before testing, ensure that:
- You have a valid Gemini API key from Google AI Studio.
- You have installed the extension in Chrome developer mode (`chrome://extensions/` -> Load unpacked).
- You have configured and saved a valid API key via the extension's popup. The popup status should be green, and the toolbar icon should be enabled.
(These steps verify the basic setup)
- Install/Reload: Load or reload the extension via `chrome://extensions/`. Check for manifest errors.
- Open Popup: Click the extension icon in the toolbar.
- Save Invalid Key: Enter an obviously invalid key (e.g., "test") and click Save. Verify an error message appears, status stays red/disconnected, and the icon remains disabled.
- Save Valid Key: Enter your real Gemini API key and click Save.
- Verify the "Validating..." message appears.
- Verify the status indicator turns green, the text updates to "Gemini API configured...", and the toolbar icon changes to the green enabled state.
- Re-open Popup: Close and reopen the popup. Verify the configured state persists.
- Check Connection: Click the "Check Connection" button. Verify it confirms the configured status.
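The "Save Invalid Key" step above can be reproduced in isolation. A minimal sketch, assuming the popup does a cheap format pre-check before the real network validation (the `looksLikeGeminiKey` helper and the exact key-shape regex are assumptions for illustration, not the extension's actual code):

```javascript
// Hypothetical pre-check sketch: Google-issued API keys are typically
// 39 characters starting with "AIza". This only filters obvious junk like
// "test"; real validation still requires a round trip to the Gemini API.
function looksLikeGeminiKey(key) {
  return typeof key === "string" && /^AIza[0-9A-Za-z_-]{35}$/.test(key.trim());
}

console.log(looksLikeGeminiKey("test"));                  // → false (stays red/disconnected)
console.log(looksLikeGeminiKey("AIza" + "x".repeat(35))); // → true (proceed to "Validating...")
```

A pre-check like this explains why an obviously invalid key can fail instantly while a plausible-but-wrong key only fails after the "Validating..." network call.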
(These steps test the original functionality without test case analysis)
- Navigate: Go to any LeetCode problem (e.g., `/problems/two-sum/`). Verify the overlay appears.
- Empty Editor: Click "Get Hint" with an empty editor. Verify a reasonable response (e.g., hints on how to start).
- Partial Code: Write some incomplete code (e.g., just a function signature). Click "Get Hint". Verify hints guide you on the next steps. Check console (F12) for code extraction logs.
- Plausible Code: Write code that attempts the solution (correct or incorrect). Click "Get Hint".
- Verify the loading spinner appears.
- Verify hints, bugs, and optimizations are displayed within a reasonable time (e.g., 3-10 seconds).
- Verify the content seems relevant to the code provided.
- Test collapsing/expanding the 💡, 🐛, ⚡ sections.
- Minimize/Maximize: Test the toggle button (`-`/`+`) on the overlay header while idle and while loading a hint.
(These steps test the new feature integrating test execution)
- Navigate: Go to a LeetCode problem.
- Empty Editor: Click "Hint (Auto-Test)".
- It should still attempt to run the code (which will likely fail immediately on LeetCode).
- Check the console (F12) for logs from `testcase.js` attempting to run and parse results.
- Verify the hint eventually displayed acknowledges the code is empty or likely failed execution.
- Code with Runtime Error: Write code that will cause a runtime error (e.g., `a = 1 / 0` in Python, or accessing `null.property` in JavaScript). Click "Hint (Auto-Test)".
- Verify the loading text updates (e.g., "Running tests...", "Getting hint...").
- Check console logs for `testcase.js` detecting the error state and extracting error details/last input.
- Verify the final hint explicitly mentions the runtime error found in the 🐛 Bugs & Failing Tests section, potentially referencing the `errorDetails` from the parsed results.
- Code Failing Some Test Cases: Write code that is logically incorrect for some inputs (e.g., Two Sum solution that doesn't handle duplicates correctly if the test cases include them). Click "Hint (Auto-Test)".
- Verify loading states and console logs showing test case parsing.
- Verify the final hint specifically mentions the failing test cases in the 🐛 Bugs & Failing Tests section, ideally referencing the Input, Output, and Expected values parsed by `testcase.js`.
- Correct Code: Write a fully correct solution that passes all test cases. Click "Hint (Auto-Test)".
- Verify loading states and console logs showing test case parsing (all should have `match: true`).
- Verify the 🐛 Bugs & Failing Tests section states something like "No bugs detected based on the provided test results."
- Verify the 💡 Hints and ⚡ Optimization Tips sections focus on explaining the solution, discussing complexity, or suggesting alternatives/refinements.
- Time Limit Exceeded (harder to test manually): If you have code that might TLE, run "Hint (Auto-Test)". Check if `testcase.js` logs "Time Limit Exceeded" as the `consoleOutput` and if the Gemini hint mentions potential performance issues.
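The three outcome classes exercised above (runtime error, failing cases, TLE) amount to a single classification step over the parsed run output. A hedged sketch of that kind of logic (the `classifyRunResult` name, the result shape, and the matched strings are illustrative assumptions, not the actual parser in `testcase.js`):

```javascript
// Classify a LeetCode run from its console output and parsed test cases.
// `cases` is assumed to look like [{ input, output, expected, match }, ...].
function classifyRunResult(consoleOutput, cases) {
  if (/Time Limit Exceeded/i.test(consoleOutput)) {
    return { state: "tle" }; // hint should discuss performance
  }
  const error = consoleOutput.match(/\b(\w*Error|Exception)\b[^\n]*/);
  if (error) {
    return { state: "error", errorDetails: error[0] }; // feed into 🐛 section
  }
  const failing = cases.filter((c) => !c.match);
  return failing.length > 0
    ? { state: "failed", failing }  // Input/Output/Expected go into the prompt
    : { state: "passed" };          // every case has match: true
}
```

Checking the console logs against a mental model like this makes it easier to spot which branch the extension actually took for a given run.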
- Different Problems: Test on various problems (easy, medium, hard) with different input/output types (arrays, strings, numbers, lists, trees).
- Long Code/Descriptions: Test with problems having very long descriptions or if you paste very long code (approaching the truncation limits) to ensure sanitization works.
- Rapid Clicks: Click the hint buttons multiple times quickly. Ensure the UI handles loading states correctly and doesn't break.
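For the rapid-click check, the expected behavior is that only one request is ever in flight. A minimal sketch of an in-flight guard (the `hintInFlight` flag and `requestHint` wrapper are hypothetical, assuming `content.js` does something equivalent):

```javascript
// Ignore clicks while a hint request is already running.
let hintInFlight = false;

function requestHint(fetchHint) {
  if (hintInFlight) return null; // rapid repeat click: do nothing
  hintInFlight = true;
  return fetchHint().finally(() => {
    hintInFlight = false; // re-enable on success or failure
  });
}
```

Clicking twice in quick succession should then leave the UI in a single consistent loading state rather than firing two overlapping Gemini calls.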
- Network Interruption: Use DevTools (Network tab -> Offline) to simulate network loss while waiting for a hint. Verify a proper error message is shown.
- Invalid API Key (After Success): Configure a valid key, get a hint, then change the key in the popup to an invalid one, and try getting another hint. Verify it fails correctly.
- LeetCode UI Changes: Be aware that if LeetCode updates its site structure, the selectors in `content.js` (for problem info) and `testcase.js` (for run button, results panels, test cases) might break. Testing involves checking if these extractions still work after known LeetCode updates.
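One way to make the extraction more resilient to such UI changes is to try an ordered list of candidate selectors instead of a single one. A sketch (the `queryFirst` helper and the candidate selector strings are illustrative assumptions, not the selectors actually in `testcase.js`); `query` is injected so the fallback logic is testable outside the browser:

```javascript
// Return the first selector in the list that matches, plus the element found.
// In the extension, `query` would be document.querySelector.bind(document);
// it is injected here so the logic can be exercised without a DOM.
function queryFirst(selectors, query) {
  for (const sel of selectors) {
    const el = query(sel);
    if (el) return { selector: sel, element: el };
  }
  return null; // every candidate broke: log loudly so breakage is obvious
}

// Illustrative candidates only; verify against LeetCode's current HTML.
const RUN_BUTTON_CANDIDATES = [
  'button[data-e2e-locator="console-run-button"]',
  'button[data-cy="run-code-btn"]',
];
```

When a LeetCode update lands, re-checking each candidate list against the live DOM (DevTools Elements tab) is then a contained, mechanical task.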
- API Key: See README troubleshooting. Check popup status and console for API errors (4xx, 5xx status codes).
- Overlay/UI: See README troubleshooting. Check console for JavaScript errors in `content.js` or `overlay.css` issues.
- Code Extraction: Check console logs from `content.js`. Did `monaco-extractor.js` work, or did it fall back to DOM extraction? Is the fallback finding the code?
- Test Runner (`testcase.js`):
- Run Button Not Found: Check the `selectors.runButton` in `testcase.js` against LeetCode's current HTML.
- Timeout Waiting for Results: LeetCode might be slow, the `selectors.consoleResult` might be wrong, or the `MutationObserver` logic isn't detecting the change. Increase the timeout in `testcase.js` for debugging.
- Incorrect Parsing: Check console logs. Did it detect the correct state (Error vs. Cases)? Are the selectors for error messages or test case elements (Input, Output, Expected, tabs) still valid? Add more `console.log` statements within `testcase.js` during parsing loops to see what data is being found.
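The timeout behavior can be isolated for debugging. A hedged sketch of a polling helper that is equivalent in spirit to waiting on a `MutationObserver` (the `waitFor` name and default values are assumptions, not the extension's code):

```javascript
// Resolve once predicate() is true; reject if it stays false past timeoutMs.
// Polling like this is an easy way to reason about (and tune) the timeout
// while debugging, even if the real code waits via MutationObserver.
function waitFor(predicate, { timeoutMs = 10000, intervalMs = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const timer = setInterval(() => {
      if (predicate()) {
        clearInterval(timer);
        resolve(true);
      } else if (Date.now() - start > timeoutMs) {
        clearInterval(timer);
        reject(new Error("Timed out waiting for the results panel"));
      }
    }, intervalMs);
  });
}
```

While debugging, raise `timeoutMs` well above LeetCode's worst observed run time before concluding that the selectors are wrong.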
- Hint Content Issues: If hints seem irrelevant or badly formatted:
- Check the prompt being sent to Gemini (log it from `gemini-api.js`).
- Check the raw JSON response from Gemini (log it). Is Gemini following the format instructions and schema?
- Check the `formatTextWithCodeBlocks` function in `content.js`.
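If formatting is the suspect, a reference transformation helps define "correct". A rough sketch of what a `formatTextWithCodeBlocks`-style helper is presumably doing (this is a guess at its behavior, not the actual `content.js` code; the fence string is built from escapes so the sketch itself stays readable):

```javascript
const FENCE = "\x60\x60\x60"; // three backticks
const FENCE_RE = new RegExp(FENCE + "(\\w*)\\n([\\s\\S]*?)" + FENCE, "g");

// Escape HTML so model output can't inject markup into the overlay.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// Turn fenced code blocks in Gemini's text into <pre><code> HTML.
function formatTextWithCodeBlocks(text) {
  return text.replace(FENCE_RE, (match, lang, code) =>
    '<pre><code class="language-' + (lang || "plaintext") + '">' +
    escapeHtml(code.trimEnd()) + "</code></pre>");
}
```

Logging Gemini's raw text next to the helper's output quickly shows whether bad formatting originates in the model's response or in the conversion step.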
- Response Time:
- Time the "Get Hint" button (should be faster, mainly Gemini API time).
- Time the "Hint (Auto-Test)" button (will be longer, as it includes LeetCode run time + parsing time + Gemini API time). Note both times.
- UI Responsiveness: Ensure the overlay can be minimized/maximized and scrolled smoothly even while waiting for either type of hint.
- Console is King: Use `console.log`, `console.warn`, and `console.error` liberally in `content.js`, `gemini-api.js`, and `testcase.js`. Check the correct console (the one for the LeetCode tab, not the popup or background).
- Reload Extension: After code changes, always reload the extension from `chrome://extensions/` (click the reload icon). Refresh the LeetCode page (`Ctrl+R` or `Cmd+R`).
- Inspect Elements: Use DevTools (Elements tab) to inspect the overlay and LeetCode page elements. Verify selectors used in the code are correct.
- Network Tab: Monitor requests to `generativelanguage.googleapis.com`. Check the request payload (is the prompt correct? Is the test data included?) and the response (status code, JSON body).
- Debug `testcase.js`: Add `debugger;` statements inside `testcase.js` and use the DevTools Sources panel to step through the code execution, especially the parsing logic, while a LeetCode run is completing.
- Isolate API Calls: Use tools like `curl` or Postman to send the exact same prompt (logged from `gemini-api.js`) directly to the Gemini API endpoint. This helps determine whether issues lie in the extension's logic or the AI's response generation.
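When replaying a call, it helps to know the payload shape the extension should be sending. A minimal sketch of a `generateContent` request body (the `buildGeminiRequest` helper is an assumption; the `contents`/`parts` structure is the Gemini REST API's documented request shape):

```javascript
// Build the JSON body for a Gemini generateContent request. The serialized
// object is what you would pass to curl's -d (or Postman's raw body) when
// replaying a logged prompt against generativelanguage.googleapis.com.
function buildGeminiRequest(prompt) {
  return {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
  };
}

const body = JSON.stringify(buildGeminiRequest("Give a hint for Two Sum."));
```

Comparing this shape against the request seen in the Network tab is a fast way to tell whether `gemini-api.js` is assembling the payload correctly before blaming the model's output.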