…ing anyway will resolve deadlock later
The deadlock was because I branched from master before merging the changes from multinode_support. After syncing up, there appears to be no more deadlock.
Todo:
Replication control looks functional at this point. @ConnorDoyle PTAL

Taking a look now, thanks @jdef
```go
		return nil
	}
	taskId := &mesos.TaskID{Value: proto.String(task.ID)}
	return k.Driver.KillTask(taskId)
```
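For context, a minimal sketch of how this fragment might sit inside the scheduler's kill path; the `KubernetesScheduler` fields, the `podToTask` map, and the import paths are assumptions for illustration, not the actual code:

```go
package scheduler

import (
	"sync"

	"code.google.com/p/goprotobuf/proto"
	mesos "github.com/mesosphere/mesos-go/mesos"
)

// Hypothetical task bookkeeping; the field names are assumptions.
type PodTask struct {
	ID string // Mesos task ID assigned when the pod was launched
}

type KubernetesScheduler struct {
	sync.Mutex
	Driver    mesos.SchedulerDriver
	podToTask map[string]*PodTask // pod ID -> task launched for it
}

// killPod asks Mesos to kill the task backing the given pod, if one exists.
func (k *KubernetesScheduler) killPod(podID string) error {
	k.Lock()
	defer k.Unlock()

	task, exists := k.podToTask[podID]
	if !exists {
		return nil // no task was launched for this pod; nothing to kill
	}
	taskId := &mesos.TaskID{Value: proto.String(task.ID)}
	return k.Driver.KillTask(taskId)
}
```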
Does the executor wait around for the pod to be destroyed by the local kubelet before sending back a TaskStatus with state TASK_KILLED?
No, that's a TODO item. Currently the executor sends a SET message to the kubelet with the collection of all pods minus the pod to delete; this causes the kubelet to delete the pod when transitioning to the new desired end state. There appears to be an /events endpoint in the kubelet that we may be able to watch in order to pick up on such an event.
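A rough sketch of that delete-by-omission approach, using hypothetical stand-ins (`PodUpdate`, the `SET` op, the `updates` channel) for the kubelet's actual update API:

```go
package executor

// Hypothetical stand-ins for the kubelet's desired-state update types.
type Pod struct{ ID string }

type Op int

const SET Op = iota // "here is the complete desired set of pods"

type PodUpdate struct {
	Op   Op
	Pods []Pod
}

type KubernetesExecutor struct {
	pods    map[string]Pod   // pods this executor has launched
	updates chan<- PodUpdate // consumed by the kubelet
}

// deletePod removes a pod by re-declaring the full desired set without it;
// the kubelet tears down any running pod absent from the new SET.
func (e *KubernetesExecutor) deletePod(podID string) {
	delete(e.pods, podID)
	remaining := make([]Pod, 0, len(e.pods))
	for _, pod := range e.pods {
		remaining = append(remaining, pod)
	}
	e.updates <- PodUpdate{Op: SET, Pods: remaining}
}
```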
@ConnorDoyle What's the right way to send the message "kill this pod that I launched but that has not yet reported RUNNING back to the master"?
I think the code here is doing the right thing, and it's the code in the executor that needs to account for various pod states when it receives the KILL signal, right?
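A sketch of what that executor-side accounting might look like; the task states, `tasks` map, and helper methods below are hypothetical, not the current code:

```go
package executor

import (
	mesos "github.com/mesosphere/mesos-go/mesos"
)

// Hypothetical per-task state tracked by the executor.
type taskState int

const (
	statePending taskState = iota // accepted, not yet handed to the kubelet
	stateRunning                  // pod has reported RUNNING
)

type kubeTask struct {
	podID string
	state taskState
}

type KubernetesExecutor struct {
	tasks map[string]*kubeTask // Mesos task ID -> tracked pod
}

// killTask handles a KILL for a task whose pod may not be RUNNING yet.
func (e *KubernetesExecutor) killTask(driver *mesos.ExecutorDriver, taskId *mesos.TaskID) {
	if task, exists := e.tasks[taskId.GetValue()]; exists {
		if task.state == stateRunning {
			// Live pod: drop it from the desired set so the kubelet stops it.
			e.removePodFromDesiredState(task.podID)
		}
		// Pending pods were never handed to the kubelet; just forget them.
		delete(e.tasks, taskId.GetValue())
	}
	// Report KILLED either way so the scheduler can reconcile; waiting for
	// the kubelet to actually destroy the pod is the TODO discussed above.
	e.sendStatus(driver, taskId, mesos.TaskState_TASK_KILLED)
}

// Hypothetical helpers: re-send the SET minus the pod, and wrap the driver's
// status-update call.
func (e *KubernetesExecutor) removePodFromDesiredState(podID string) {}

func (e *KubernetesExecutor) sendStatus(d *mesos.ExecutorDriver, id *mesos.TaskID, s mesos.TaskState) {
}
```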
…state with SET calls
…er to replace hardcoded etcd namespace, eliminate unsupported endpoints
…back into Go here and go panics when we tried to delete from slave.offers; thinking that the slave entry was undefined
…ey are in the running list
…king problems ahead...
Current status: thinking that the networking piece should be resolved in another PR (it would be nice to have Vagrant set it all up for us).
…urrent and desired state; update desired state host at binding time
@ConnorDoyle PTAL. Planning to merge this soon.
@@ -6,7 +6,15 @@ When [Google Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes) meet

[![GoDoc](https://godoc.org/github.com/mesosphere/kubernetes-mesos?status.png)](https://godoc.org/github.com/mesosphere/kubernetes-mesos)
Kubernetes and Mesos are a match made in heaven. Kubernetes enables the Pod (group of co-located containers) abstraction, along with Pod labels for service discovery, load-balancing, and replication control. Mesos provides the fine-grained resource allocations for pods across nodes in a cluster, and can make Kubernetes play nicely with other frameworks running on the same cluster resources.

Within the Kubernetes framework for Mesos, the framework scheduler first registers with Mesos and begins watching etcd's pod registry, and then Mesos offers the scheduler sets of available resources from the cluster nodes (slaves/minions). The scheduler matches Mesos' resource offers to unassigned Kubernetes pods, and then sends a launchTasks message to the Mesos master, which claims the resources and forwards the request on to the appropriate slave. The slave then fetches the kubelet/executor and starts running it. Once the scheduler knows that there are resources claimed for the kubelet to launch its pod, the scheduler writes a Binding to etcd to assign the pod to a specific host. The appropriate kubelet notices the assignment, pulls down the pod, and runs it.
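To make that flow concrete, here is a compressed sketch of the offer-matching step described above. The callback and driver calls loosely follow the mesos-go scheduler API of the era, and the helpers (`nextUnassignedPod`, `fitsOffer`, `newTaskInfoForPod`, `bindPodToHost`) are hypothetical:

```go
// Sketch only: match Mesos resource offers to unassigned Kubernetes pods.
func (k *KubernetesScheduler) ResourceOffers(driver *mesos.SchedulerDriver, offers []mesos.Offer) {
	for _, offer := range offers {
		pod := k.nextUnassignedPod()
		if pod == nil || !fitsOffer(pod, &offer) {
			driver.DeclineOffer(offer.Id) // hand the resources back to Mesos
			continue
		}
		// LaunchTasks claims the offered resources; the master forwards the
		// request to the slave, which fetches and runs the kubelet/executor.
		task := newTaskInfoForPod(pod, &offer)
		driver.LaunchTasks(offer.Id, []mesos.TaskInfo{task}, nil)
		// Writing a Binding to etcd assigns the pod to the offer's host; the
		// kubelet there notices the assignment, pulls the pod, and runs it.
		bindPodToHost(pod, offer.GetHostname())
	}
}
```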
👍
@jdef looks great, left some comments, which are mostly minor. Thanks!

@adam-mesos @ConnorDoyle Thanks for the feedback, I've pushed some commits to address the concerns.

Let's get this in! Thanks for all your hard work, James.
Development branch to support k8s replication controller.
In progress, unstable. Use at your own risk.