
Task/set centralized logging for minikube #54


Open
wants to merge 5 commits into main

Conversation

SelahattinSert
Owner

No description provided.

@SelahattinSert SelahattinSert requested a review from ozgen July 9, 2025 06:51
@SelahattinSert SelahattinSert self-assigned this Jul 9, 2025
@SelahattinSert SelahattinSert removed the request for review from ozgen July 9, 2025 07:21
@SelahattinSert SelahattinSert requested a review from ozgen July 10, 2025 08:42
Collaborator
@ozgen ozgen left a comment

I added a few comments @SelahattinSert

@@ -2,7 +2,6 @@ apiVersion: v1
kind: Service
metadata:
  name: camera-onboarding-service
  namespace: default
Collaborator

did you change the namespace of your app deployment?

Owner Author

Yes, it was the default namespace before; then I changed it to the app namespace. Is that wrong? Also I need to ask: is defining namespace.yaml the way I did wrong? @ozgen

Collaborator

There is no right or wrong answer here: you can predefine the namespaces before running the kustomization command, or you can add a namespace.yaml to the kustomization. Both approaches are fine. @SelahattinSert
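For reference, a namespace.yaml for that approach can be as small as the sketch below (the namespace name "app" is an assumption, not taken from this PR); it would then be listed under resources: in the kustomization.

# namespace.yaml -- minimal sketch; the name "app" is assumed
apiVersion: v1
kind: Namespace
metadata:
  name: app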

@@ -1,4 +0,0 @@
# k8s/kustomization.yaml
resources:
Collaborator

Did you intentionally remove the separate kustomization.yaml files for each component?

Consider organizing your manifests into structured subfolders like the example:

k8s/
  app/
    deployment.yaml
    service.yaml
    kustomization.yaml
  logging/
    deployment.yaml
    configmap.yaml
    kustomization.yaml
  kustomization.yaml   # top-level, referencing both app/ and logging/

This way, the top-level kustomization.yaml can aggregate both the app and logging configurations. You can also define shared settings like namespaces and image overrides centrally.
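As a rough sketch of that layout (directory names are taken from the tree above; the "app" namespace and the image name are assumptions, not taken from this repo), the top-level kustomization.yaml might look like:

# k8s/kustomization.yaml -- top-level aggregator (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Shared settings applied to everything referenced below
namespace: app            # assumed namespace name

# Each entry points at a subfolder that has its own kustomization.yaml
resources:
  - app
  - logging

# Central image override (image name and tag are placeholders)
images:
  - name: camera-onboarding-service
    newTag: latest

Applying the whole stack would then be a single kubectl apply -k k8s/.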

- role: pod
relabel_configs:
- source_labels: [ __meta_kubernetes_pod_label_app ]
action: keep
Collaborator

Add a filter to get logs only from the specific namespace your app is installed in.
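For instance, a keep rule on the namespace meta label could look like the sketch below (the namespace name "app" is an assumption; it should match wherever the app is actually deployed):

relabel_configs:
  # Keep only targets from the application's namespace (name assumed)
  - source_labels: [ __meta_kubernetes_namespace ]
    action: keep
    regex: app
  # Existing rule: keep only pods that carry an app label
  - source_labels: [ __meta_kubernetes_pod_label_app ]
    action: keep
    regex: .+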

- source_labels: [ __meta_kubernetes_pod_label_app ]
  action: keep
  regex: .+
- source_labels: [ __meta_kubernetes_namespace ]
Collaborator

Can you also include example logs from your application as they appear in Loki? It would help to verify the log structure and ensure compatibility with the current Loki configuration.

labels:
app: loki
spec:
containers:
Collaborator

Do we need to define a ServiceAccount for Loki or Grafana in this setup?
It might not be strictly necessary unless they need to interact with the Kubernetes API or access cluster resources via RBAC. If so, we should include minimal-permission ServiceAccounts to follow best practices. Let me know if this deployment assumes any special permissions.

Owner Author

In our logging stack only Promtail interacts with the Kubernetes API. So do you want me to configure a ServiceAccount for Promtail? I think there is no ServiceAccount configured for Loki and Grafana. @ozgen

Collaborator

If only Promtail interacts with the Kubernetes API, then yes, let’s configure a minimal-permission ServiceAccount for Promtail to follow least privilege principles. No need to define one for Loki or Grafana unless they require special permissions in the future. @SelahattinSert
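A minimal sketch of what that could look like (resource names and the "app" namespace are assumptions; for the pod discovery role, read-only access to pods is enough):

# promtail-rbac.yaml -- sketch; names and namespace are assumed
apiVersion: v1
kind: ServiceAccount
metadata:
  name: promtail
  namespace: app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: promtail
rules:
  # Promtail's kubernetes_sd pod role only needs to read pod metadata
  - apiGroups: [ "" ]
    resources: [ "pods" ]
    verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: promtail
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: promtail
subjects:
  - kind: ServiceAccount
    name: promtail
    namespace: app

The Promtail pod spec would then reference it with serviceAccountName: promtail.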

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- configmap.yaml
Collaborator

why did you need this kustomization file?

- name: config
  configMap:
    name: loki-config
- name: loki-storage
Collaborator

You did not use a persistentVolumeClaim; if the pod restarts, could we still get the previous logs?
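If loki-storage is not backed by persistent storage, the stored chunks are lost on restart. A rough sketch of a claim plus the matching volume entry (claim name and size are placeholders):

# loki-pvc.yaml -- sketch; claim name and size are placeholders
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki-storage
spec:
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 5Gi

# and in the Loki deployment, the volume would then point at the claim:
volumes:
  - name: loki-storage
    persistentVolumeClaim:
      claimName: loki-storage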


[security]
admin_user = admin
admin_password = admin
Collaborator

Please do not use plain text for passwords; this should be defined in a Secret. @SelahattinSert
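One way to do that (sketch; the Secret name, keys, and values are placeholders) is to drop the [security] block from grafana.ini and inject the credentials through Grafana's GF_SECURITY_ADMIN_* environment variables from a Secret:

# grafana-admin-secret.yaml -- sketch; name, keys, and values are placeholders
apiVersion: v1
kind: Secret
metadata:
  name: grafana-admin
type: Opaque
stringData:
  admin-user: admin
  admin-password: change-me

# referenced from the Grafana container spec:
env:
  - name: GF_SECURITY_ADMIN_USER
    valueFrom:
      secretKeyRef:
        name: grafana-admin
        key: admin-user
  - name: GF_SECURITY_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: grafana-admin
        key: admin-password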
