add volumes support to ExecutorPlugin #13026
If there's no disagreement, I'm happy to provide the code implementation.
As previously discussed on Slack, I couldn't think of any problems with it at the time, so I imagine it might just never have been implemented or requested. Giving it a bit more thought, the only possible issue I see with the proposal is where … so I'm not sure about … Implementation-wise, that would be around this part of the agent code. There are some existing volume mounts (e.g. for SA tokens) that have to be handled as well.

Also, as I mentioned on Slack, there is already support for using k8s Secrets in plugins. We should add an example of this feature in those docs.
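For reference, the existing Secrets support works through the regular `apiv1.Container` fields, so a plugin can already reference a Secret via `env`. A minimal sketch of such a plugin ConfigMap — the plugin name, image, and Secret name are illustrative; the `sidecar.container` data key and the `workflows.argoproj.io/configmap-type: ExecutorPlugin` label are the ones the controller reads:

```yaml
# Hypothetical plugin ConfigMap; names are made up for illustration.
# The `sidecar.container` key holds a plain core/v1 Container, so the
# standard `env.valueFrom.secretKeyRef` mechanism applies.
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-executor-plugin
  labels:
    workflows.argoproj.io/configmap-type: ExecutorPlugin
data:
  sidecar.container: |
    name: hello-executor-plugin
    image: my-registry/hello-plugin:v1   # illustrative image
    ports:
      - containerPort: 7522
    env:
      - name: API_TOKEN
        valueFrom:
          secretKeyRef:
            name: hello-plugin-secret    # Secret managed separately
            key: token
```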
Thank you for looking at it. The more I think about the problem and the more familiar I become with the code, the less sure I am about the best place to define the volumes. I have a working solution if I just extend the spec; here is the patch (without tests):

```diff
diff --git a/pkg/plugins/spec/plugin_types.go b/pkg/plugins/spec/plugin_types.go
index 28cdb73ce..61698d4ac 100644
--- a/pkg/plugins/spec/plugin_types.go
+++ b/pkg/plugins/spec/plugin_types.go
@@ -28,6 +28,7 @@ type Sidecar struct {
 	// AutomountServiceAccount mounts the service account's token. The service account must have the same name as the plugin.
 	AutomountServiceAccountToken bool `json:"automountServiceAccountToken,omitempty"`
 	Container apiv1.Container `json:"container"`
+	Volumes   []apiv1.Volume  `json:"volumes,omitempty"`
 }

 func (s Sidecar) Validate() error {
diff --git a/workflow/controller/agent.go b/workflow/controller/agent.go
index 4c94329f9..6d827c3f9 100644
--- a/workflow/controller/agent.go
+++ b/workflow/controller/agent.go
@@ -285,6 +285,7 @@ func (woc *wfOperationCtx) getExecutorPlugins(ctx context.Context) ([]apiv1.Cont
 			volumes = append(volumes, *volume)
 			c.VolumeMounts = append(c.VolumeMounts, *volumeMount)
 		}
+		volumes = append(volumes, s.Volumes...)
 		sidecars = append(sidecars, *c)
 	}
 }
diff --git a/workflow/util/plugins/configmap.go b/workflow/util/plugins/configmap.go
index a97df800a..a0112c6df 100644
--- a/workflow/util/plugins/configmap.go
+++ b/workflow/util/plugins/configmap.go
@@ -38,6 +38,15 @@ func ToConfigMap(p *spec.Plugin) (*apiv1.ConfigMap, error) {
 			"sidecar.container": string(data),
 		},
 	}
+
+	if p.Spec.Sidecar.Volumes != nil {
+		volumes, err := yaml.Marshal(p.Spec.Sidecar.Volumes)
+		if err != nil {
+			return nil, err
+		}
+		cm.Data["sidecar.volumes"] = string(volumes)
+	}
+
 	for k, v := range p.Annotations {
 		cm.Annotations[k] = v
 	}
@@ -69,5 +78,8 @@ func FromConfigMap(cm *apiv1.ConfigMap) (*spec.Plugin, error) {
 	if err := yaml.UnmarshalStrict([]byte(cm.Data["sidecar.container"]), &p.Spec.Sidecar.Container); err != nil {
 		return nil, err
 	}
+	if err := yaml.UnmarshalStrict([]byte(cm.Data["sidecar.volumes"]), &p.Spec.Sidecar.Volumes); err != nil {
+		return nil, err
+	}
 	return p, p.Validate()
 }
```
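With the patch above, a plugin manifest could declare volumes next to the container. A sketch of what that could look like — the plugin name, image, volume, and mount path are all illustrative, and `spec.sidecar.volumes` is the field the diff introduces (the `ExecutorPlugin` apiVersion/kind follow the plugin manifest format as I understand it):

```yaml
# Illustrative plugin manifest assuming the proposed `volumes` field.
apiVersion: argoproj.io/v1alpha1
kind: ExecutorPlugin
metadata:
  name: hello
spec:
  sidecar:
    container:
      name: hello-executor-plugin
      image: my-registry/hello-plugin:v1   # illustrative image
      ports:
        - containerPort: 7522
      volumeMounts:
        - name: plugin-config              # mounts the volume below
          mountPath: /etc/plugin
          readOnly: true
    volumes:                               # new field from the diff
      - name: plugin-config
        secret:
          secretName: hello-plugin-secret  # illustrative Secret
```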
What I do not like about this solution is that, due to how the executor plugin works, the agent pod will always create/mount those volumes whenever I use any plugin. Before checking the code, I had misunderstood that only the plugins you use in a workflow get added to the agent pod, but it turns out that ALL installed plugins are always added to the agent pod.

For example, if I define plugin-1 and use it in my workflow, the agent pod will look something like:

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
    # ...standard agent volumes...
  containers:
    - name: plugin-1
      args: ...
    - name: main
      command:
        - argoexec
      args:
        - agent
        - main
        - --loglevel
        - debug
      # ...
  initContainers:
    - name: init
      args:
        - agent
        - init
        - --loglevel
        - debug
      # ...
```

If I add plugin-2, which uses a volume, the agent pod will look like:

```yaml
apiVersion: v1
kind: Pod
spec:
  volumes:
    # ...standard agent volumes...
    # ...plugin-2 volumes...
  containers:
    - name: plugin-1
      args: ...
    - name: plugin-2
      args: ...
      volumeMounts:
        - ...
    - name: main
      args:
        - agent
        - main
        - --loglevel
      # ...
  initContainers:
    - name: init
      args:
        - agent
        - init
        - --loglevel
        - debug
      # ...
```

From now on, whichever plugin I use in my workflow, the agent pod will still define all of them. This is really inefficient: the more plugins I define, the more resources I need to run existing workflows with plugins, regardless of whether I use all of them or just one.

Anyway, back to the topic. Originally, I was mostly interested in using … Anyway, I hope the explanation makes sense. I will try to reach you on Slack, because it may be easier to discuss it there.
Summary
I was looking at the executor plugin's spec definition, which is basically a subset of the container definition. I was wondering if we should also extend the spec with a Volumes definition, or whether there is a security issue with that. I am mostly after a way to mount Secret/ConfigMap volumes inside the plugin(s).
Use Cases
I believe there is value in having access to Secret volumes, which can be managed through ESO (External Secrets Operator) for use cases where you want to dynamically manage secrets for plugins.
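As a sketch of the ESO use case (all names are illustrative, and the manifest follows ESO's `external-secrets.io/v1beta1` API as I understand it): an ExternalSecret materializes a regular k8s Secret, which the plugin could then mount through the proposed `volumes` field.

```yaml
# Illustrative ExternalSecret: ESO syncs a remote value into a k8s
# Secret named `hello-plugin-secret`, which a plugin `volumes` entry
# could then mount as a file.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: hello-plugin-secret
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: my-secret-store        # illustrative SecretStore
    kind: SecretStore
  target:
    name: hello-plugin-secret    # the k8s Secret ESO creates
  data:
    - secretKey: token
      remoteRef:
        key: prod/hello-plugin   # illustrative remote key
```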
Message from the maintainers:
Love this feature request? Give it a 👍. We prioritise the proposals with the most 👍.