This repository has been archived by the owner on Mar 6, 2024. It is now read-only.

Use gpt-4-turbo-preview model #524

Closed
wants to merge 12 commits

Conversation

pascal-zarrad

@pascal-zarrad pascal-zarrad commented Feb 28, 2024

Support gpt-4-turbo-preview as the heavy model. It is cheaper than gpt-4 and provides a much larger context window.

Summary by CodeRabbit


  • New Feature: Added support for the gpt-4-turbo-preview model as a heavy model, replacing gpt-4. This allows users to utilize the latest AI model for their tasks.
  • Refactor: Improved code readability and maintainability by replacing the if-else chain in TokenLimits constructor with a switch-case.
  • Chore: Updated file exclusions and default values for model selection to align with the new changes.

Summary by CodeRabbit

  • New Features
    • Updated token limits for enhanced performance, including support for the 'gpt-4-turbo-preview' model with increased limits and an updated knowledge cutoff date.


coderabbitaidev bot commented Feb 28, 2024

Warning

CodeRabbit GitHub Action Detected

The repository is using both CodeRabbit Pro and CodeRabbit Open Source (via GitHub Actions), which is not recommended as it may lead to duplicate comments and extra noise. Please remove the CodeRabbit GitHub Action.

Walkthrough

The recent update enhances the TokenLimits class by incorporating a switch statement for setting token limits based on the model type. A notable addition is the case for 'gpt-4-turbo-preview', which introduces higher token limits and a unique knowledgeCutOff date, reflecting an optimization for newer model capabilities.
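The switch described above can be sketched roughly as follows. This is a hypothetical reconstruction: only the gpt-4-turbo-preview numbers (128000 max tokens, a 2023-12-01 knowledge cutoff, and a flat 100-token margin) are stated in this thread; the field names, other model cases, and their values are illustrative assumptions.

```typescript
// Hypothetical sketch of the switch-based TokenLimits constructor.
// Only the gpt-4-turbo-preview values are taken from the review comments;
// the remaining model limits are illustrative assumptions.
export class TokenLimits {
  maxTokens = 4000
  requestTokens = 0
  responseTokens = 1000
  knowledgeCutOff = '2021-09-01'

  constructor(model = 'gpt-3.5-turbo') {
    switch (model) {
      case 'gpt-4-turbo-preview':
        this.maxTokens = 128000
        this.responseTokens = 4000
        this.knowledgeCutOff = '2023-12-01'
        break
      case 'gpt-4':
        this.maxTokens = 8000
        this.responseTokens = 2000
        break
      case 'gpt-3.5-turbo-16k':
        this.maxTokens = 16300
        this.responseTokens = 3000
        break
      default:
        // gpt-3.5-turbo and unknown models keep the defaults above.
        break
    }
    // A flat 100-token margin is reserved for message overhead.
    this.requestTokens = this.maxTokens - this.responseTokens - 100
  }
}
```

Centralizing the per-model limits in one switch keeps the fallthrough behavior explicit and makes adding a new model a single-case change.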

Changes

  • src/.../limits.ts: Updated TokenLimits with a switch statement; added a 'gpt-4-turbo-preview' case for increased token limits and a new knowledgeCutOff date.

🐇✨
In the realm of code, where logic plays,
A rabbit hopped, through the syntax maze.
With a switch and a case, it leaped ahead,
Setting limits high, where none dare tread.
'GPT-4 Turbo', it whispered with glee,
A new frontier, for you and for me.
🌟📚


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository from git and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has the YAML language server enabled, add the following line at the top of the file to enable auto-completion and validation:
    # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

@pascal-zarrad pascal-zarrad deleted the feature/use-gpt-4 branch February 28, 2024 19:16
@pascal-zarrad pascal-zarrad restored the feature/use-gpt-4 branch February 28, 2024 19:16
Contributor

github-actions bot commented Feb 28, 2024

CodeRabbit

Walkthrough

This update enhances the GitHub PR template and the TokenLimits constructor to support gpt-4-turbo-preview as the heavy model, replacing gpt-4. It also updates file exclusions and default values for model selection. Readability and maintainability are improved by replacing the if-else chain in the TokenLimits constructor with a switch-case.

Changes

  • action.yml, src/limits.ts: The GitHub PR template and the TokenLimits constructor now support gpt-4-turbo-preview as the heavy model, replacing gpt-4. Default values for model selection and file exclusions have been updated. The if-else chain in the TokenLimits constructor has been replaced with a switch-case for better readability and maintainability.

Uplevel your code reviews with CodeRabbit Pro

CodeRabbit Pro

If you like this project, please support us by purchasing the Pro version. The Pro version has advanced context, superior noise reduction and several proprietary improvements compared to the open source version. Moreover, CodeRabbit Pro is free for open source projects.

Contributor

@github-actions github-actions bot left a comment


CodeRabbit

Commits: files that changed from the base of the PR, between d5ec397 and bddbc9f.
Files selected (2)
  • action.yml (4)
  • src/limits.ts (1)
Files ignored due to filter (1)
  • dist/index.js
Review comments generated (6)
  • Review: 4
  • LGTM: 2

Tips

Chat with CodeRabbit Bot (@coderabbitai)

  • Reply on review comments left by this bot to ask follow-up questions. A review comment is a comment on a diff or a file.
  • Invite the bot into a review comment chain by tagging @coderabbitai in a reply.

Code suggestions

  • The bot may make code suggestions, but please review them carefully before committing since the line number ranges may be misaligned.
  • You can edit the comment made by the bot and manually tweak the suggestion if it is slightly off.

Pausing incremental reviews

  • Add @coderabbitai: ignore anywhere in the PR description to pause further reviews from the bot.

Resolved review threads:
  • src/limits.ts (1)
  • action.yml (3)

@coderabbitaidev coderabbitaidev bot left a comment


Review Status

Actionable comments generated: 0

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR, between d5ec397 and bddbc9f.
Files ignored due to path filters (2)
  • action.yml is excluded by: !**/*.yml
  • dist/index.js is excluded by: !dist/**
Files selected for processing (1)
  • src/limits.ts (1 hunks)
Additional comments: 2
src/limits.ts (2)
  • 22-26: The addition of the gpt-4-turbo-preview case with increased token limits and a distinct knowledgeCutOff date is aligned with the PR objectives to support the gpt-4-turbo-preview model. This change effectively utilizes the model's capabilities for handling larger contexts. However, ensure that the significantly higher maxTokens value of 128000 and the knowledgeCutOff date of 2023-12-01 are consistent with the model's specifications and intended use cases. Additionally, consider documenting the rationale behind these specific values within the code or project documentation for future reference and clarity.
  • 32-32: The logic for calculating requestTokens by subtracting responseTokens and an additional 100 tokens from maxTokens is applied universally after the switch statement. This approach is consistent and maintains simplicity in the token management logic. However, it's important to verify that this margin of 100 tokens is sufficient and appropriate for all models, especially considering the significant increase in maxTokens for the gpt-4-turbo-preview model. If the margin is intended to account for overhead or ensure a buffer, it may be worth evaluating if a proportional margin relative to the maxTokens would be more appropriate for models with vastly different token limits.
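
The proportional-margin alternative raised in the second comment could be sketched as follows. The function name, the 0.5% ratio, and the 100-token floor are hypothetical choices for illustration, not part of the PR:

```typescript
// Illustrative only: a proportional safety margin instead of the flat
// 100 tokens discussed above. Name and ratio are hypothetical.
function requestBudget(maxTokens: number, responseTokens: number): number {
  // Reserve 0.5% of the context window, but never less than 100 tokens.
  const margin = Math.max(100, Math.floor(maxTokens * 0.005))
  return maxTokens - responseTokens - margin
}
```

With these numbers, an 8000-token model keeps the familiar 100-token buffer, while a 128000-token model reserves 640 tokens, scaling the overhead allowance with the context size.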

@pascal-zarrad
Author

This PR was meant for a fork. It can be ignored and has been closed.

@pascal-zarrad pascal-zarrad deleted the feature/use-gpt-4 branch February 28, 2024 19:21