Actions: nod-ai/shark-ai


Showing runs from all workflows
23,311 workflow runs

Add sharding support for latent attention block
PkgCI #536: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 8m 11s rsuderman:latent_sharding
Add sharding support for latent attention block
Llama Benchmarking 8B Tests #2153: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 3m 56s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - shortfin #1043: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 6m 17s rsuderman:latent_sharding
Add sharding support for latent attention block
pre-commit #4168: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 25s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - sharktank #2536: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 23m 48s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - sharktank perplexity short #1468: Pull request #935 synchronize by rsuderman
February 7, 2025 19:02 2m 50s rsuderman:latent_sharding
Augment model_management.py to support llama2 25m trained on tinystories
CI - sharktank #2535: Pull request #936 opened by renxida
February 7, 2025 18:58 24m 29s renxida:tiny-llamas
Augment model_management.py to support llama2 25m trained on tinystories
Llama Benchmarking 8B Tests #2152: Pull request #936 opened by renxida
February 7, 2025 18:58 3m 54s renxida:tiny-llamas
Augment model_management.py to support llama2 25m trained on tinystories
CI - shortfin #1042: Pull request #936 opened by renxida
February 7, 2025 18:58 7m 31s renxida:tiny-llamas
Augment model_management.py to support llama2 25m trained on tinystories
CI - sharktank perplexity short #1467: Pull request #936 opened by renxida
February 7, 2025 18:58 2m 34s renxida:tiny-llamas
Add sharding support for latent attention block
pre-commit #4166: Pull request #935 opened by rsuderman
February 7, 2025 18:54 30s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - sharktank perplexity short #1466: Pull request #935 opened by rsuderman
February 7, 2025 18:54 2m 38s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - sharktank #2534: Pull request #935 opened by rsuderman
February 7, 2025 18:54 8m 18s rsuderman:latent_sharding
Add sharding support for latent attention block
CI - shortfin #1041: Pull request #935 opened by rsuderman
February 7, 2025 18:54 6m 23s rsuderman:latent_sharding
Add sharding support for latent attention block
Llama Benchmarking 8B Tests #2151: Pull request #935 opened by rsuderman
February 7, 2025 18:54 3m 54s rsuderman:latent_sharding
[sharktank] restore custom matmul kernel
Llama Benchmarking 8B Tests #2150: Pull request #896 synchronize by dan-garvey
February 7, 2025 17:04 5m 58s users/dan-garvey/enable_custom_fp8_matmul
[sharktank] restore custom matmul kernel
CI - sharktank perplexity short #1465: Pull request #896 synchronize by dan-garvey
February 7, 2025 17:04 2m 34s users/dan-garvey/enable_custom_fp8_matmul
[sharktank] restore custom matmul kernel
CI - sharktank #2533: Pull request #896 synchronize by dan-garvey
February 7, 2025 17:04 15m 14s users/dan-garvey/enable_custom_fp8_matmul