Ability to configure cpu: "x" #1451

Closed
JCzz opened this issue Mar 30, 2020 · 10 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
triage/duplicate: Indicates an issue is a duplicate of other open issue.

Comments

JCzz commented Mar 30, 2020

I am getting an Istio scheduling error: 1 Insufficient cpu.

Can you configure more CPU for the nodes? I am looking for something like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# the control plane node
- role: control-plane
- role: worker
  constraints:
    memory: "500m"
    cpu: "2"
JCzz added the kind/feature label on Mar 30, 2020
BenTheElder (Member) commented:

You're not going to get more CPU by configuring kind; the CPU detected comes from your host :-)

I'm going to take a guess that you might be using Docker Desktop on Mac / Windows, in which case you need to configure it in the Docker settings:

https://kind.sigs.k8s.io/docs/user/quick-start/#settings-for-docker-desktop

As for configuring limits, that is #877.
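
If you want to confirm what the nodes actually see, something like this should work (assuming the default cluster name kind, and that nproc is present in the node image):

$ docker exec kind-control-plane nproc
$ kubectl describe node kind-control-plane | grep -A 6 Capacity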

BenTheElder added the triage/duplicate label on Mar 30, 2020
BenTheElder (Member) commented:

Note that #877 is not going to solve your insufficient CPU issue; for that you need the host to have sufficient CPU.

JCzz (Author) commented Mar 31, 2020

Thanks for the info.

tsouche commented Jun 20, 2020

Hello there, I am a newbie and I may be asking a stupid question:

  • I run kind on an Ubuntu machine with 32 GB of memory and 120 GB of disk.
  • I need to run a Cassandra cluster on the kind cluster, and each node needs at least 0.5 CPU and
    1 GB of memory.

When I look at the node, it gives this:

Capacity:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110

So in theory there are more than enough resources to go around. However, when I try to deploy Cassandra, the first Pod stays in status 'Pending' because of a lack of resources. And indeed, the Node's allocated resources look like this:

Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (1%)  100m (1%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)

The node does not actually seem to get access to the available resources: it stays limited to 10% of a CPU and 50Mi of memory.

So, reading the exchange above and having read #887, I understand that I need to configure Docker on my host machine so that it allows the containers simulating the kind nodes to grab more resources. Do I understand that right?

BenTheElder (Member) commented:

  • Requests / Limits are Allocated resources, coming from the running workloads' configurations; Allocatable is how much the node has (see the sketch below).
  • #887 (node images for 1.16.x) is completely unrelated?
  • No, you do not need to configure anything in kind, the host, or Docker here. The nodes are unconstrained and will each think they have the whole host's resources.

You'd have to share the rest of the pod and node details to know the allocatable amount and pod requirements.
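
For example, the scheduler compares a pod's requests against each node's Allocatable when placing it. A hypothetical spec fragment like this (the numbers happen to match the Cassandra pod below) only fits on a node with at least 500m CPU and 1Gi of memory not already requested by other pods:

resources:
  requests:
    cpu: "500m"
    memory: "1Gi"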

BenTheElder (Member) commented:

(also lots of our users are new to most of these things! No worries 🙃)

BenTheElder (Member) commented:

Docker settings do need changing on Mac or Windows, where the Docker Desktop app maintains a VM to run Docker in; that VM becomes the kind host and has limited resources.
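
You can check what that VM (and therefore kind) is getting with, for example:

$ docker info --format '{{.NCPU}} CPUs, {{.MemTotal}} bytes of memory'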

tsouche commented Jun 20, 2020

Here is the Allocatable for one of the worker nodes (and there is no difference between the worker nodes):

Allocatable:
  cpu:                8
  ephemeral-storage:  114336932Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32757588Ki
  pods:               110

In parallel, the Cassandra service is defined:

$ kubectl get svc/cassandra
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   None         <none>        9042/TCP   14s

and the StatefulSet does not start because its first Pod never gets out of Pending:

$ kubectl apply -f app-cassandra/cassandra-statefulset.yaml
statefulset.apps/cassandra created
storageclass.storage.k8s.io/fast created

$ kubectl get pods -l="app=cassandra" -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
cassandra-0   0/1     Pending   0          2m    <none>   <none>   <none>           <none>

So I guess that if this is not a CPU/memory allocation issue, I will need to find other causes. I guess I need to figure out how to extract the logs for the StatefulSet...

@BenTheElder: many thanks for the help anyway :-)

[EDIT]: it actually looks like an issue with the Pod binding to the PVC... no link with resource allocation.
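
A quick way to confirm that is to look at the claims themselves (assuming the claim name the StatefulSet generates, cassandra-data-cassandra-0):

$ kubectl get pvc
$ kubectl describe pvc cassandra-data-cassandra-0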

BenTheElder (Member) commented:

For PVCs you'll want to be on kind 0.7+, ideally the latest release.

If you describe the pods you can see scheduling and readiness issues in better detail than with get.
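
To check which release you're on (illustrative output; the exact fields will vary):

$ kind version
kind v0.8.1 go1.14.2 linux/amd64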

tsouche commented Jun 20, 2020

I'm running kind 0.8.1, and I thought that using the default "standard" storage class (rancher.io/local-path) would enable the Cassandra StatefulSet to get its PVCs bound to local storage... but for some reason (which I don't understand) it does not work:

$ kubectl describe pod cassandra-0
Name:           cassandra-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app=cassandra
                controller-revision-hash=cassandra-95df4dff4
                statefulset.kubernetes.io/pod-name=cassandra-0
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  StatefulSet/cassandra
Containers:
  cassandra:
    Image:       gcr.io/google-samples/cassandra:v13
    Ports:       7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Limits:
      cpu:     500m
      memory:  1Gi
    Requests:
      cpu:      500m
      memory:   1Gi
    Readiness:  exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
    Environment:
      MAX_HEAP_SIZE:           512M
      HEAP_NEWSIZE:            100M
      CASSANDRA_SEEDS:         cassandra-0.cassandra.default.svc.cluster.local
      CASSANDRA_CLUSTER_NAME:  K8Demo
      CASSANDRA_DC:            DC1-K8Demo
      CASSANDRA_RACK:          Rack1-K8Demo
      POD_IP:                   (v1:status.podIP)
    Mounts:
      /cassandra_data from cassandra-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-sbwpx (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  cassandra-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cassandra-data-cassandra-0
    ReadOnly:   false
  default-token-sbwpx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-sbwpx
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "cassandra-0": pod has unbound immediate PersistentVolumeClaims
  Warning  FailedScheduling  <unknown>  default-scheduler  running "VolumeBinding" filter plugin for pod "cassandra-0": pod has unbound immediate PersistentVolumeClaims 

and

$ kubectl describe storageclass/local-path
Name:            local-path
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-path"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}

Provisioner:           rancher.io/local-path
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

so I will have to drill further until I understand. 😞

[EDIT]: I think I found the issue. The cassandra-statefulset.yaml file I am using was not pointing to the right StorageClass. My big and shameful mistake... but I'm happy I found it 😄
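
For reference, a minimal sketch of the relevant volumeClaimTemplates fragment after the fix, assuming kind's default standard class (the names come from the stock Cassandra tutorial manifest; the storage size is an assumption):

volumeClaimTemplates:
- metadata:
    name: cassandra-data
  spec:
    accessModes: [ "ReadWriteOnce" ]
    storageClassName: standard   # previously a class whose provisioner is not available in kind
    resources:
      requests:
        storage: 1Gi

After that change, the Pods schedule and bind: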

$ kubectl get pods -o wide
NAME          READY   STATUS    RESTARTS   AGE    IP           NODE               NOMINATED NODE   READINESS GATES
cassandra-0   1/1     Running   0          3m3s   10.244.1.3   k8s-tuto-worker4   <none>           <none>
cassandra-1   0/1     Running   0          15s    10.244.3.4   k8s-tuto-worker2   <none>           <none>
