k8sGPT operator not working with GEMINI backend #1393

Open
3 of 4 tasks
VedRatan opened this issue Mar 11, 2025 · 2 comments
Checklist

  • I've searched for similar issues and couldn't find anything matching
  • I've included steps to reproduce the behavior

Affected Components

  • K8sGPT (CLI)
  • K8sGPT Operator

K8sGPT Version

No response

Kubernetes Version

No response

Host OS and its Version

No response

Steps to reproduce

  1. Install the k8sgpt-operator via Helm (example commands after step 4 below).
  2. Create a secret for Google Gemini containing the API key.
  3. Use google as the backend and gemini-2.0-flash as the model, as shown in the YAML below.
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gemini-2.0-flash
    backend: google
    secret:
      name: k8sgpt-sample-secret
      key: gemini-api-key
    # anonymized: false
    # language: english
  noCache: false
  version: v0.3.41
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url>
  # extraOptions:
  #   backstage:
  #     enabled: true
  4. Apply the k8sgpt CR.
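
For reference, a rough sketch of steps 1, 2, and 4 as shell commands. The Helm repo URL, release name, and the GEMINI_API_KEY variable are assumptions; the secret name and key match the spec.ai.secret values in the YAML above.

# Step 1: install the operator via Helm (repo URL assumed from the k8sgpt docs)
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install k8sgpt-operator k8sgpt/k8sgpt-operator \
  --namespace k8sgpt-operator-system --create-namespace

# Step 2: create the secret the CR refers to (key name must match spec.ai.secret.key)
kubectl create secret generic k8sgpt-sample-secret \
  --namespace k8sgpt-operator-system \
  --from-literal=gemini-api-key="$GEMINI_API_KEY"

# Step 4: apply the K8sGPT CR (assuming it is saved as k8sgpt-sample.yaml)
kubectl apply -f k8sgpt-sample.yaml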

Expected behaviour

Result objects should get created in the default namespace.
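
A quick way to check (assuming the operator registers the Result CRD under the core.k8sgpt.ai group):

kubectl get results.core.k8sgpt.ai -A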

Actual behaviour

Getting the following logs in the k8sgpt-sample pod:

erateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1843,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54822","status_code":2}
{"level":"info","ts":1741662633.7511296,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1754,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54846","status_code":2}
{"level":"info","ts":1741662635.536725,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1756,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54872","status_code":2}
{"level":"info","ts":1741662637.3092797,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1725,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54882","status_code":2}
{"level":"info","ts":1741662639.1071768,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1736,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54906","status_code":2}
{"level":"info","ts":1741662640.94626,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1734,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:54920","status_code":2}
{"level":"info","ts":1741662642.8558068,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1726,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:37214","status_code":2}
{"level":"info","ts":1741662644.948416,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1750,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:37226","status_code":2}
{"level":"info","ts":1741662647.365036,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1751,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:37254","status_code":2}
{"level":"info","ts":1741662650.4184694,"caller":"server/log.go:50","msg":"request failed. failed while calling AI provider google: googleapi: Error 400: * GenerateContentRequest.generation_config.max_output_tokens: max_output_tokens must be positive.","duration_ms":1750,"method":"/schema.v1.ServerAnalyzerService/Analyze","request":"backend:\"google\" explain:true anonymize:true language:\"english\" max_concurrency:10 output:\"json\"","remote_addr":"10.244.0.5:37260","status_code":2}

Additional Information

No response

@github-project-automation github-project-automation bot moved this to Proposed in Backlog Mar 11, 2025
@VedRatan VedRatan changed the title [Question]: <Question Title> k8sGPT operator not working with GEMINI backend Mar 11, 2025
@AlexsJones (Member)

Try this

apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gemini-2.0-flash
    maxTokens: 1000   # <-- add this field
    backend: google
    secret:
      name: k8sgpt-sample-secret
      key: gemini-api-key
    # anonymized: false
    # language: english
  noCache: false
  version: v0.3.41
  # filters:
  #   - Ingress
  # sink:
  #   type: slack
  #   webhook: <webhook-url>
  # extraOptions:
  #   backstage:
  #     enabled: true
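
Presumably the 400 comes from maxTokens being unset, so the deployed k8sgpt passes a non-positive max_output_tokens to the Gemini API; setting spec.ai.maxTokens to a positive value should make the request valid. A sketch of rolling out the change, assuming the CR is saved as k8sgpt-sample.yaml and the operator names the deployment after the CR:

kubectl apply -f k8sgpt-sample.yaml
kubectl -n k8sgpt-operator-system logs deploy/k8sgpt-sample -f   # the 400 errors should stop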
