Description
To give users a quick, at-a-glance assessment of a startup's potential, this ticket covers creating a basic scoring algorithm. The system will analyze the final JSON report generated by the agents and calculate a numerical score from predefined criteria, helping to standardize the initial evaluation process.
Implementation Tips
- Scoring Logic: Create a new Python function or class (e.g., in scans/scoring.py). This function will take the final report JSON object as input.
- Criteria & Weights: Define a simple set of rules and weights. For example:
- Founders: +10 points if any founder has a previous successful exit mentioned in the description.
- Industry: +5 points if the industry is a "hot" sector like AI or SaaS.
- Employee Size: +1 point for every 10 employees, up to a maximum of 20 points.
- Red Flags: -20 points if any negative sentiment or major risks were identified (this would be a future enhancement).
- Data Model: Add a new score field (e.g., IntegerField) to the ScanJob model to store the calculated score.
- Integration: In your perform_scan_task Celery task, after the final report JSON is successfully generated, call your new scoring function and pass the JSON to it. Save the returned score to the job.score field before saving the final ScanJob object.
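The tips above could be sketched roughly as follows. This is a minimal, hypothetical implementation of `scans/scoring.py`: the report field names (`founders`, `industry`, `employee_count`, `red_flags`) and the "successful exit" keyword check are assumptions, since the actual report schema isn't specified in this ticket.

```python
# scans/scoring.py -- hypothetical sketch; the report keys used here
# ("founders", "industry", "employee_count", "red_flags") are assumptions
# and should be adjusted to match the real report JSON.

HOT_SECTORS = {"ai", "saas"}  # assumed list of "hot" industries


def calculate_score(report: dict) -> int:
    score = 0

    # Founders: +10 if any founder description mentions a successful exit.
    for founder in report.get("founders", []):
        if "exit" in founder.get("description", "").lower():
            score += 10
            break

    # Industry: +5 for a "hot" sector such as AI or SaaS.
    if report.get("industry", "").lower() in HOT_SECTORS:
        score += 5

    # Employee size: +1 point per 10 employees, capped at 20 points.
    score += min(report.get("employee_count", 0) // 10, 20)

    # Red flags: -20 if any major risks were identified (future enhancement).
    if report.get("red_flags"):
        score -= 20

    return score
```

The Celery task would then call `calculate_score(report_json)` after the report is generated and assign the result to `job.score` before the final save, as described above.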
Acceptance Criteria (Checklist)
- A Python function for calculating a score based on a report exists.
- The scoring logic includes at least 3-4 different criteria.
- The ScanJob model has a new field to store the calculated score.
- The Celery task calls the scoring function and saves the score to the database upon successful completion of a scan.
- The calculated score is displayed on the scan results page and the scan history page.
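The model and task wiring implied by the checklist might look like the fragment below. This is a non-runnable Django/Celery sketch: `ScanJob` and `perform_scan_task` come from the ticket, but the `run_agents` helper and the exact field options are assumptions.

```python
# scans/models.py (fragment, sketch only)
class ScanJob(models.Model):
    # ... existing fields ...
    score = models.IntegerField(null=True, blank=True)  # new field for the calculated score


# scans/tasks.py (fragment, sketch only)
@shared_task
def perform_scan_task(job_id):
    job = ScanJob.objects.get(pk=job_id)
    report_json = run_agents(job)  # assumed existing agent pipeline
    job.report = report_json
    job.score = calculate_score(report_json)  # from scans/scoring.py
    job.save()
```

Remember to generate and run a migration for the new `score` field (`manage.py makemigrations scans && manage.py migrate`).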
Unit Tests
- Test the Scoring Algorithm: This is the most important part. Create a dedicated test file (e.g., scans/tests/test_scoring.py).
- Create several mock JSON report dictionaries representing different scenarios (e.g., a great company, an average one, one with red flags).
- Write a separate test for each scenario, calling your scoring function with the mock data and asserting that it returns the exact score you expect. This validates your logic thoroughly.
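The scenario tests described above could take the shape below. In the real project the function would be imported with `from scans.scoring import calculate_score`; a minimal stand-in is defined here so the example is self-contained, and the mock report shapes are assumptions.

```python
# scans/tests/test_scoring.py -- illustrative sketch. A stand-in scoring
# function is defined inline; replace it with the real import in the project.


def calculate_score(report: dict) -> int:  # stand-in for scans.scoring
    score = 0
    if any("exit" in f.get("description", "").lower()
           for f in report.get("founders", [])):
        score += 10
    if report.get("industry", "").lower() in {"ai", "saas"}:
        score += 5
    score += min(report.get("employee_count", 0) // 10, 20)
    if report.get("red_flags"):
        score -= 20
    return score


def test_strong_company_scores_high():
    report = {
        "founders": [{"description": "Sold her last startup in a successful exit."}],
        "industry": "AI",
        "employee_count": 120,
    }
    assert calculate_score(report) == 27  # 10 + 5 + 12


def test_average_company_scores_modestly():
    report = {
        "founders": [{"description": "First-time founder."}],
        "industry": "Retail",
        "employee_count": 30,
    }
    assert calculate_score(report) == 3  # employee count only


def test_red_flags_subtract_points():
    report = {
        "industry": "SaaS",
        "employee_count": 0,
        "red_flags": ["pending litigation"],
    }
    assert calculate_score(report) == -15  # 5 - 20
```

Each test pins the exact expected score for one scenario, so a change to any weight or rule fails loudly.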