feat: support fuse pods to launch in eager mode #4571

Open · wants to merge 3 commits into master
Conversation

@EvanCley (Contributor) commented Mar 7, 2025

Ⅰ. Describe what this PR does

Support launching fuse pods in either eager or lazy mode; the default is lazy mode.
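For context, a minimal sketch of what the launch mode API surface might look like. Only the EagerMode constant and the spec.fuse.launchMode field are confirmed by the diff excerpt and example below; the type name and the LazyMode constant here are hypothetical.

// Hypothetical sketch of the launch mode API (Go); not the PR's exact definitions.
type FuseLaunchModeType string

const (
    // EagerMode schedules fuse pods onto the selected nodes as soon as the runtime is set up.
    EagerMode FuseLaunchModeType = "Eager"
    // LazyMode, the default, launches a fuse pod on a node only when a workload there mounts the dataset.
    LazyMode FuseLaunchModeType = "Lazy"
)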

Ⅱ. Does this pull request fix one issue?

no

Ⅲ. List the added test cases (unit test/integration test) if any, please explain if no tests are needed.

All test cases are included in the second commit.

Ⅳ. Describe how to verify it

Create a dataset and a runtime of any type with spec.fuse.launchMode set, and the Alluxio fuse pods will then be running on the nodes that match the fuse nodeSelector labels. For example:

apiVersion: data.fluid.io/v1alpha1
kind: AlluxioRuntime
metadata:
  name: hbase
spec:
  replicas: 1
  tieredstore:
    levels:
      - mediumtype: MEM
        path: /dev/shm
        quota: 2Gi
        high: "0.95"
        low: "0.7"
  fuse:
    launchMode: Eager
    nodeSelector:
      xxx: xx

Ⅴ. Special notes for reviews

  1. The VineyardRuntime spec has no fuse.nodeSelector field; this will be raised in a separate issue.
  2. A ThinRuntimeProfile's fuse.nodeSelector is overridden by the ThinRuntime's fuse.nodeSelector; this will also be raised in a separate issue.
  3. Validation of illegal fuse nodeSelector labels is not handled in this PR (see the sketch after this list).
  4. The resource idle issue in eager mode (fuse pods may sit idle on nodes that never mount the dataset) is not handled in this PR.
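For note 3, a hedged sketch of what such validation could look like if added later (not part of this PR), using the label validation helpers from k8s.io/apimachinery. The package and helper name validateFuseNodeSelector are hypothetical.

package utils

import (
    "fmt"

    "k8s.io/apimachinery/pkg/util/validation"
)

// validateFuseNodeSelector rejects nodeSelector entries whose keys or values are not
// legal Kubernetes label strings, which is the gap described in note 3 above.
func validateFuseNodeSelector(nodeSelector map[string]string) error {
    for key, val := range nodeSelector {
        if errs := validation.IsQualifiedName(key); len(errs) > 0 {
            return fmt.Errorf("invalid fuse nodeSelector key %q: %v", key, errs)
        }
        if errs := validation.IsValidLabelValue(val); len(errs) > 0 {
            return fmt.Errorf("invalid fuse nodeSelector value %q for key %q: %v", val, key, errs)
        }
    }
    return nil
}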


fluid-e2e-bot (bot) commented Mar 7, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign zwwhdls for approval by writing /assign @zwwhdls in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


fluid-e2e-bot (bot) commented Mar 7, 2025

Hi @EvanCley. Thanks for your PR.

I'm waiting for a fluid-cloudnative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@cheyang requested a review from Syspretor on March 8, 2025 05:39
@cheyang (Collaborator) commented Mar 8, 2025

@EvanCley Thank you for the contribution. Please help fix the ut issue:

W0307 16:43:38.447583   74201 client_config.go:623] error creating inClusterConfig, falling back to default config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
--- FAIL: TestTransformFuseWithLaunchMode (0.00s)
    transform_fuse_test.go:148: check test fuse launch mode case 1 failure, got:map[],want:map[fuse_node:true]
    transform_fuse_test.go:148: check test fuse launch mode case 2 failure, got:map[],want:map[fluid.io/f-fluid-hbase:true fuse_node:true]
    transform_fuse_test.go:148: check test network mode case 3 failure, got:map[],want:map[fluid.io/f-fluid-hbase:true]
FAIL

@EvanCley (Contributor, Author) commented Mar 8, 2025

(quoting the unit test failure report from the previous comment)

OK, the UT fix is done.

// if fuse launch mode is eager, set worker node affinity by fuse node selector
if e.runtimeInfo.GetFuseLaunchMode() == datav1alpha1.EagerMode {
    for key, val := range e.runtimeInfo.GetFuseNodeSelector() {
        workersToUpdate.Spec.Template.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution =
Collaborator review comment:
The current implementation creates a separate PreferredSchedulingTerm for each label; should we combine them into a single term?

term := corev1.NodeSelectorTerm{
    MatchExpressions: []corev1.NodeSelectorRequirement{
        {Key: "user-label1", Operator: corev1.NodeSelectorOpIn, Values: []string{"val1"}},
        {Key: "user-label2", Operator: corev1.NodeSelectorOpIn, Values: []string{"val2"}},
    },
}
workers.Spec.Template.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution =
    []corev1.PreferredSchedulingTerm{{Weight: 100, Preference: term}}
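As a follow-up on this suggestion, a hedged sketch of how the loop over GetFuseNodeSelector() from the diff excerpt above could build that single combined term (variable names reused from that excerpt). Note the semantic difference: with one term, a node earns the weight only if it matches all expressions, whereas separate PreferredSchedulingTerms are scored independently and their weights add up.

// Collect one NodeSelectorRequirement per fuse nodeSelector label, then emit a
// single PreferredSchedulingTerm covering all of them.
requirements := []corev1.NodeSelectorRequirement{}
for key, val := range e.runtimeInfo.GetFuseNodeSelector() {
    requirements = append(requirements, corev1.NodeSelectorRequirement{
        Key:      key,
        Operator: corev1.NodeSelectorOpIn,
        Values:   []string{val},
    })
}
workersToUpdate.Spec.Template.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution =
    []corev1.PreferredSchedulingTerm{{
        Weight:     100,
        Preference: corev1.NodeSelectorTerm{MatchExpressions: requirements},
    }}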

Contributor (Author) reply:
nice suggestion, I will revise it

@cheyang (Collaborator) commented Mar 12, 2025

/test fluid-e2e
