Deployment in Kubernetes¶
Deploy Loggie DaemonSet¶
Make sure kubectl and helm are installed and executable in your local environment.
Download helm-chart¶
VERSION=v1.3.0
helm pull https://github.com/loggie-io/installation/releases/download/${VERSION}/loggie-${VERSION}.tgz && tar xvzf loggie-${VERSION}.tgz
Replace <VERSION> above with a specific version number such as v1.3.0, which can be found in the release tags.
Modify Configuration¶
cd into the chart directory:
cd installation/helm-chart
Check values.yaml and modify it as you like.
The following parameters are currently configurable:
Image¶
image: loggieio/loggie:main
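For production deployments you will typically pin a released tag rather than the main development image, for example the v1.3.0 release used earlier in this guide (assuming a matching image tag is published):

image: loggieio/loggie:v1.3.0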
Resource¶
resources:
  limits:
    cpu: 2
    memory: 2Gi
  requests:
    cpu: 100m
    memory: 100Mi
Additional CMD Arguments¶
extraArgs: {}

For example, to set the log level to debug and disable the JSON log format:

extraArgs:
  log.level: debug
  log.jsonFormat: false
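These key-value pairs are rendered as command-line flags for the Loggie binary. A minimal sketch of what the resulting container arguments would look like under that assumption (the exact rendering depends on the chart templates):

args:
  # hypothetical rendered form of the extraArgs above
  - -log.level=debug
  - -log.jsonFormat=false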
Extra Mount¶
extraVolumeMounts:
  - mountPath: /var/log/pods
    name: podlogs
  - mountPath: /var/lib/kubelet/pods
    name: kubelet
  - mountPath: /var/lib/docker
    name: docker
extraVolumes:
  - hostPath:
      path: /var/log/pods
      type: DirectoryOrCreate
    name: podlogs
  - hostPath:
      path: /var/lib/kubelet/pods
      type: DirectoryOrCreate
    name: kubelet
  - hostPath:
      path: /var/lib/docker
      type: DirectoryOrCreate
    name: docker
Because Loggie itself runs in a container, it needs to mount some node paths as volumes in order to collect logs. Otherwise, the log files are not visible inside the Loggie container and cannot be collected.
Here is a brief list of the paths that need to be mounted when Loggie collects different kinds of logs:
- Collect stdout: Loggie collects from /var/log/pods, so Loggie needs to mount:

  volumeMounts:
    - mountPath: /var/log/pods
      name: podlogs
    - mountPath: /var/lib/docker
      name: docker
  volumes:
    - hostPath:
        path: /var/log/pods
        type: DirectoryOrCreate
      name: podlogs
    - hostPath:
        path: /var/lib/docker
        type: DirectoryOrCreate
      name: docker

  Note that the log files under /var/log/pods may be soft links into the docker root path, which defaults to /var/lib/docker; in that case /var/lib/docker needs to be mounted as well. If another runtime such as containerd is used, there is no need to mount /var/lib/docker, and Loggie will look for the actual standard output path from /var/log/pods.
- Collect logs mounted by business Pods using HostPath: for example, if the business Pods uniformly mount their logs to the /data/logs path of the node, you need to mount that path:

  volumeMounts:
    - mountPath: /data/logs
      name: logs
  volumes:
    - hostPath:
        path: /data/logs
        type: DirectoryOrCreate
      name: logs
- Collect logs mounted by business Pods using EmptyDir: by default, emptyDir data is kept under the /var/lib/kubelet/pods path of the node, so Loggie needs to mount this path. If the kubelet configuration has been modified, change this path accordingly:

  volumeMounts:
    - mountPath: /var/lib/kubelet/pods
      name: kubelet
  volumes:
    - hostPath:
        path: /var/lib/kubelet/pods
        type: DirectoryOrCreate
      name: kubelet
- Collect logs mounted by business Pods using PV: same as using EmptyDir.
- No mount and rootFsCollectionEnabled: true: Loggie will automatically find the actual path inside the container from the docker rootfs, and the docker root path needs to be mounted in this case:

  volumeMounts:
    - mountPath: /var/lib/docker
      name: docker
  volumes:
    - hostPath:
        path: /var/lib/docker
        type: DirectoryOrCreate
      name: docker

  If the actual docker root path has been changed, the volumeMount and volume here need to be modified accordingly. For example, if the root path is changed to /data/docker, the mount is as follows:

  volumeMounts:
    - mountPath: /data/docker
      name: docker
  volumes:
    - hostPath:
        path: /data/docker
        type: DirectoryOrCreate
      name: docker
Note:
- Loggie needs to record the status of collected files (offset, etc.) to avoid re-collecting files from the beginning after a restart. The default storage path is /data/loggie.db, so the /data/loggie--{{ template "loggie.name" . }} directory is mounted.
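As an illustration only (the chart already renders this mount for you; the volume name below is hypothetical), such a state mount follows the same hostPath pattern as the examples above:

volumeMounts:
  - mountPath: /data
    name: registry
volumes:
  - hostPath:
      path: /data/loggie--loggie   # hypothetical rendered directory name
      type: DirectoryOrCreate
    name: registry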
Schedule¶
nodeSelector: {}
affinity: {}
  # podAntiAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     - labelSelector:
  #         matchExpressions:
  #           - key: app
  #             operator: In
  #             values:
  #               - loggie
  #       topologyKey: "kubernetes.io/hostname"
tolerations: []
  # - effect: NoExecute
  #   operator: Exists
  # - effect: NoSchedule
  #   operator: Exists
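For example, to schedule the DaemonSet only on nodes carrying a particular label (the label key below is hypothetical):

nodeSelector:
  loggie.io/collect: "true"   # hypothetical label applied to the target nodes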
Updating Strategy¶
updateStrategy:
  type: RollingUpdate

The type can be RollingUpdate or OnDelete.
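With OnDelete, Pods are replaced only after you delete them manually, which gives full control over the rollout order; a minimal sketch:

updateStrategy:
  type: OnDelete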
Global Configuration¶
config:
  loggie:
    reload:
      enabled: true
      period: 10s
    monitor:
      logger:
        period: 30s
        enabled: true
      listeners:
        filesource: ~
        filewatcher: ~
        reload: ~
        sink: ~
    discovery:
      enabled: true
      kubernetes:
        containerRuntime: containerd
        fields:
          container.name: containername
          logConfig: logconfig
          namespace: namespace
          node.name: nodename
          pod.name: podname
    http:
      enabled: true
      port: 9196
Use containerRuntime: containerd to specify the containerd runtime.
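For a cluster that still uses the docker runtime, the same section would look like this (a minimal sketch):

config:
  loggie:
    discovery:
      enabled: true
      kubernetes:
        containerRuntime: docker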
Service¶
If Loggie needs to receive data sent by other services, it must expose its ports through a Kubernetes Service.
Under normal circumstances, Loggie in Agent mode only needs to expose its own management port:
servicePorts:
  - name: monitor
    port: 9196
    targetPort: 9196
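If an Agent also needs to receive data directly, append the corresponding source port here; for example, the Grpc source defaults to port 6066 (the same pattern appears in the Aggregator section below):

servicePorts:
  - name: monitor
    port: 9196
    targetPort: 9196
  - name: grpc
    port: 6066
    targetPort: 6066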
Deploy¶
For the initial deployment, we deploy into the loggie namespace and let helm create the namespace automatically:

helm install loggie ./ -nloggie --create-namespace

If the loggie namespace already exists in your environment, you can drop --create-namespace. Of course, you can also replace -nloggie with a namespace of your own.
Kubernetes version issue
On older Kubernetes versions that do not serve apiextensions.k8s.io/v1, the installation may fail with:

failed to install CRD crds/crds.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"

In this case, execute rm loggie/crds/crds.yaml and reinstall.
Check deployment status¶
After execution, use the helm command to check the deployment status:
helm list -nloggie
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
loggie loggie 1 2021-11-30 18:06:16.976334232 +0800 CST deployed loggie-v0.1.0 v0.1.0
At the same time, you can also use the kubectl command to check whether the Pod has been created.
kubectl -nloggie get po -owide

NAME          READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
loggie-sxxwh  1/1     Running   0          5m21s   10.244.0.5   kind-control-plane   <none>           <none>
Deploy Loggie Aggregator¶
Deploying the Aggregator is basically the same as deploying the Agent. The Helm chart provides an aggregator config section; set enabled: true to turn it on.
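A minimal sketch of the corresponding values.yaml change, assuming the chart exposes the switch under an aggregator key as the text above suggests:

aggregator:
  enabled: true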
A StatefulSet method is provided in the helm chart; you can also change it to a Deployment or another method according to your needs.
At the same time, you can add content to values.yaml according to your case:
- Add nodeSelector or affinity, plus tolerations if the nodes have taints, so that the Aggregator StatefulSet is scheduled only on certain nodes (see the sketch after this list).
- Add ports to the service to receive data. For example, to use the Grpc source, its default port 6066 needs to be specified:

  servicePorts:
    - name: grpc
      port: 6066
      targetPort: 6066
- Add a cluster field in discovery.kubernetes, which names this Loggie cluster and is used to distinguish the Aggregator from Agents or other Loggie clusters, as shown below:

  config:
    loggie:
      discovery:
        enabled: true
        kubernetes:
          cluster: aggregator
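A sketch of the scheduling entries from the first item above (the node label is hypothetical):

nodeSelector:
  loggie.io/aggregator: "true"   # hypothetical label applied to the dedicated nodes
tolerations:
  - effect: NoSchedule
    operator: Exists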
Command reference:
helm install loggie-aggregator ./ -nloggie-aggregator --create-namespace
Note
The Loggie Aggregator can also be deployed as a Deployment or StatefulSet; please refer to the DaemonSet deployment above and modify the helm chart yourself.