{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":670712691,"defaultBranch":"main","name":"kube","ownerLogin":"gdt-dev","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2023-07-25T16:57:09.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/140431742?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717253774.0","currentOid":""},"activityList":{"items":[{"before":"d778db486f5709e5bdce816ebe805768e9924c5a","after":"064e0e337cdc3374bd6b42d01f209afd79660e78","ref":"refs/heads/placement","pushedAt":"2024-06-01T15:49:07.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"add placement spread assertions\n\nThe `assert.placement` field of a `gdt-kube` test Spec allows a test author to\nspecify the expected scheduling outcome for a set of Pods returned by the\nKubernetes API server from the result of a `kube.get` call.\n\nSuppose you have a Deployment resource with a `TopologySpreadConstraints` that\nspecifies the Pods in the Deployment must land on different hosts:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: nginx\n```\n\nYou can create a `gdt-kube` test case that verifies that your `nginx`\nDeployment's Pods are evenly spread across all available hosts:\n\n```yaml\ntests:\n - kube:\n get: deployments/nginx\n assert:\n placement:\n spread: kubernetes.io/hostname\n```\n\nIf there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`\nwill ensure that each Pod landed on a unique host. If there are fewer hosts\nthan the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there\nis an even spread of Pods to hosts, with any host having no more than one more\nPod than any other.\n\nDebug/trace output includes information on how the placement spread\nlooked like to the gdt-kube placement spread asserter:\n\n```\njaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go\n=== RUN TestPlacementSpread\n=== RUN TestPlacementSpread/placement-spread\n[gdt] [placement-spread] kube: create [ns: default]\n[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true\n[gdt] [placement-spread] using timeout of 40s (expected: false)\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]\n[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true\n[gdt] [placement-spread] kube: delete [ns: default]\n[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true\n\n--- PASS: TestPlacementSpread (4.98s)\n --- PASS: TestPlacementSpread/placement-spread (4.96s)\nPASS\nok \tcommand-line-arguments\t4.993s\n```\n\nIssue #7\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"add placement spread assertions"}},{"before":"43fea83a3e1c3d5bcaf3c75a9e77f9d8ac03a919","after":"d778db486f5709e5bdce816ebe805768e9924c5a","ref":"refs/heads/placement","pushedAt":"2024-06-01T15:48:22.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"add placement spread assertions\n\nThe `assert.placement` field of a `gdt-kube` test Spec allows a test author to\nspecify the expected scheduling outcome for a set of Pods returned by the\nKubernetes API server from the result of a `kube.get` call.\n\nSuppose you have a Deployment resource with a `TopologySpreadConstraints` that\nspecifies the Pods in the Deployment must land on different hosts:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: nginx\n```\n\nYou can create a `gdt-kube` test case that verifies that your `nginx`\nDeployment's Pods are evenly spread across all available hosts:\n\n```yaml\ntests:\n - kube:\n get: deployments/nginx\n assert:\n placement:\n spread: kubernetes.io/hostname\n```\n\nIf there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`\nwill ensure that each Pod landed on a unique host. If there are fewer hosts\nthan the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there\nis an even spread of Pods to hosts, with any host having no more than one more\nPod than any other.\n\nDebug/trace output includes information on how the placement spread\nlooked like to the gdt-kube placement spread asserter:\n\n```\njaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go\n=== RUN TestPlacementSpread\n=== RUN TestPlacementSpread/placement-spread\n[gdt] [placement-spread] kube: create [ns: default]\n[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true\n[gdt] [placement-spread] using timeout of 40s (expected: false)\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]\n[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true\n[gdt] [placement-spread] kube: delete [ns: default]\n[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true\n\n--- PASS: TestPlacementSpread (4.98s)\n --- PASS: TestPlacementSpread/placement-spread (4.96s)\nPASS\nok \tcommand-line-arguments\t4.993s\n```\n\nIssue #7\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"add placement spread assertions"}},{"before":"0773a69e026ee5a1cd6d88e1372c027d34dbf865","after":"43fea83a3e1c3d5bcaf3c75a9e77f9d8ac03a919","ref":"refs/heads/placement","pushedAt":"2024-06-01T15:46:05.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"add placement spread assertions\n\nThe `assert.placement` field of a `gdt-kube` test Spec allows a test author to\nspecify the expected scheduling outcome for a set of Pods returned by the\nKubernetes API server from the result of a `kube.get` call.\n\nSuppose you have a Deployment resource with a `TopologySpreadConstraints` that\nspecifies the Pods in the Deployment must land on different hosts:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: nginx\n```\n\nYou can create a `gdt-kube` test case that verifies that your `nginx`\nDeployment's Pods are evenly spread across all available hosts:\n\n```yaml\ntests:\n - kube:\n get: deployments/nginx\n assert:\n placement:\n spread: kubernetes.io/hostname\n```\n\nIf there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`\nwill ensure that each Pod landed on a unique host. If there are fewer hosts\nthan the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there\nis an even spread of Pods to hosts, with any host having no more than one more\nPod than any other.\n\nDebug/trace output includes information on how the placement spread\nlooked like to the gdt-kube placement spread asserter:\n\n```\njaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go\n=== RUN TestPlacementSpread\n=== RUN TestPlacementSpread/placement-spread\n[gdt] [placement-spread] kube: create [ns: default]\n[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true\n[gdt] [placement-spread] using timeout of 40s (expected: false)\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]\n[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true\n[gdt] [placement-spread] kube: delete [ns: default]\n[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true\n\n--- PASS: TestPlacementSpread (4.98s)\n --- PASS: TestPlacementSpread/placement-spread (4.96s)\nPASS\nok \tcommand-line-arguments\t4.993s\n```\n\nIssue #7\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"add placement spread assertions"}},{"before":"779092c88fd2668901165c586d821f6644334419","after":"0773a69e026ee5a1cd6d88e1372c027d34dbf865","ref":"refs/heads/placement","pushedAt":"2024-06-01T15:42:01.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"add placement spread assertions\n\nThe `assert.placement` field of a `gdt-kube` test Spec allows a test author to\nspecify the expected scheduling outcome for a set of Pods returned by the\nKubernetes API server from the result of a `kube.get` call.\n\nSuppose you have a Deployment resource with a `TopologySpreadConstraints` that\nspecifies the Pods in the Deployment must land on different hosts:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: nginx\n```\n\nYou can create a `gdt-kube` test case that verifies that your `nginx`\nDeployment's Pods are evenly spread across all available hosts:\n\n```yaml\ntests:\n - kube:\n get: deployments/nginx\n assert:\n placement:\n spread: kubernetes.io/hostname\n```\n\nIf there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`\nwill ensure that each Pod landed on a unique host. If there are fewer hosts\nthan the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there\nis an even spread of Pods to hosts, with any host having no more than one more\nPod than any other.\n\nDebug/trace output includes information on how the placement spread\nlooked like to the gdt-kube placement spread asserter:\n\n```\njaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go\n=== RUN TestPlacementSpread\n=== RUN TestPlacementSpread/placement-spread\n[gdt] [placement-spread] kube: create [ns: default]\n[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true\n[gdt] [placement-spread] using timeout of 40s (expected: false)\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]\n[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true\n[gdt] [placement-spread] kube: delete [ns: default]\n[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true\n\n--- PASS: TestPlacementSpread (4.98s)\n --- PASS: TestPlacementSpread/placement-spread (4.96s)\nPASS\nok \tcommand-line-arguments\t4.993s\n```\n\nIssue #7\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"add placement spread assertions"}},{"before":null,"after":"779092c88fd2668901165c586d821f6644334419","ref":"refs/heads/placement","pushedAt":"2024-06-01T14:56:14.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"add placement spread assertions\n\nThe `assert.placement` field of a `gdt-kube` test Spec allows a test author to\nspecify the expected scheduling outcome for a set of Pods returned by the\nKubernetes API server from the result of a `kube.get` call.\n\nSuppose you have a Deployment resource with a `TopologySpreadConstraints` that\nspecifies the Pods in the Deployment must land on different hosts:\n\n```yaml\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: nginx-deployment\n labels:\n app: nginx\nspec:\n replicas: 3\n selector:\n matchLabels:\n app: nginx\n template:\n metadata:\n labels:\n app: nginx\n spec:\n containers:\n - name: nginx\n image: nginx:latest\n ports:\n - containerPort: 80\n topologySpreadConstraints:\n - maxSkew: 1\n topologyKey: kubernetes.io/hostname\n whenUnsatisfiable: DoNotSchedule\n labelSelector:\n matchLabels:\n app: nginx\n```\n\nYou can create a `gdt-kube` test case that verifies that your `nginx`\nDeployment's Pods are evenly spread across all available hosts:\n\n```yaml\ntests:\n - kube:\n get: deployments/nginx\n assert:\n placement:\n spread: kubernetes.io/hostname\n```\n\nIf there are more hosts than the `spec.replicas` in the Deployment, `gdt-kube`\nwill ensure that each Pod landed on a unique host. If there are fewer hosts\nthan the `spec.replicas` in the Deployment, `gdt-kube` will ensure that there\nis an even spread of Pods to hosts, with any host having no more than one more\nPod than any other.\n\nDebug/trace output includes information on how the placement spread\nlooked like to the gdt-kube placement spread asserter:\n\n```\njaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go\n=== RUN TestPlacementSpread\n=== RUN TestPlacementSpread/placement-spread\n[gdt] [placement-spread] kube: create [ns: default]\n[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true\n[gdt] [placement-spread] using timeout of 40s (expected: false)\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false\n[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false\n[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false\n[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false\n[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true\n[gdt] [placement-spread] kube: get [ns: default]\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3\n[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]\n[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true\n[gdt] [placement-spread] kube: delete [ns: default]\n[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true\n\n--- PASS: TestPlacementSpread (4.98s)\n --- PASS: TestPlacementSpread/placement-spread (4.96s)\nPASS\nok \tcommand-line-arguments\t4.993s\n```\n\nIssue #7\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"add placement spread assertions"}},{"before":"c58e23725701b6f721e96042d1575b48f4d39bf4","after":"cd9bc31cac3e6c4a7d356fdab2639f51e03d393c","ref":"refs/heads/main","pushedAt":"2024-05-27T19:14:29.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Merge pull request #10 from jaypipes/multi-kind\n\nadd ability to pass a KinD config to fixture","shortMessageHtmlLink":"Merge pull request #10 from jaypipes/multi-kind"}},{"before":"f2cbad81bd87d7c0f2f052df4b1b9ed1e41bbb71","after":"c58e23725701b6f721e96042d1575b48f4d39bf4","ref":"refs/heads/main","pushedAt":"2024-05-27T14:50:42.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Merge pull request #8 from jaypipes/fix-fail-eval\n\nfix failure evaluation","shortMessageHtmlLink":"Merge pull request #8 from jaypipes/fix-fail-eval"}},{"before":"f2cbad81bd87d7c0f2f052df4b1b9ed1e41bbb71","after":null,"ref":"refs/tags/V1.3.0","pushedAt":"2024-02-18T22:28:18.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"}},{"before":"03ea631fc6db90f787790934919d59e5b129379e","after":"f2cbad81bd87d7c0f2f052df4b1b9ed1e41bbb71","ref":"refs/heads/main","pushedAt":"2024-02-18T22:18:32.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Merge pull request #6 from jaypipes/action-refactor\n\nrefactor kube plugin's Action system","shortMessageHtmlLink":"Merge pull request #6 from jaypipes/action-refactor"}},{"before":"fbe369433eba1ebccd7b15451f32e301985a2cb0","after":"03ea631fc6db90f787790934919d59e5b129379e","ref":"refs/heads/main","pushedAt":"2024-01-15T15:54:21.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"a-hilaly","name":"Amine","path":"/a-hilaly","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/10897901?s=80&v=4"},"commit":{"message":"Merge pull request #5 from jaypipes/gdt-v1.3.0\n\nbring in gdt-1.3.0","shortMessageHtmlLink":"Merge pull request #5 from jaypipes/gdt-v1.3.0"}},{"before":null,"after":"d94d24c8ee95c85dd0f2d3c52f167a9c76ce494d","ref":"refs/heads/on-fail","pushedAt":"2023-08-18T21:30:56.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"save: on-fail","shortMessageHtmlLink":"save: on-fail"}},{"before":"0190c8b1c4fe8e243c8ec7b089e930a1838b644e","after":"fbe369433eba1ebccd7b15451f32e301985a2cb0","ref":"refs/heads/main","pushedAt":"2023-08-16T13:00:34.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"path_formats -> path-formats","shortMessageHtmlLink":"path_formats -> path-formats"}},{"before":"35a5d5855b198f8abce60bda2f7489cf64b40e63","after":"0190c8b1c4fe8e243c8ec7b089e930a1838b644e","ref":"refs/heads/main","pushedAt":"2023-08-15T16:01:29.000Z","pushType":"pr_merge","commitsCount":4,"pusher":{"login":"a-hilaly","name":"Amine","path":"/a-hilaly","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/10897901?s=80&v=4"},"commit":{"message":"Merge pull request #3 from gdt-dev/action-parse\n\nrework label selector and resource identifier type","shortMessageHtmlLink":"Merge pull request #3 from gdt-dev/action-parse"}},{"before":null,"after":"42953f07a270839558f0136d3d90cf297752c87e","ref":"refs/heads/action-parse","pushedAt":"2023-08-14T21:56:44.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"rework label selector and resource identifier type\n\n**This introduces a breaking API change**\n\nPreviously, the following YAML was used to select (or delete) resources\nin a `gdt-kube` test spec:\n\n```yaml\ntests:\n - kube.get: pods\n with:\n labels:\n app: nginx\n```\n\nThis functionality has been changed to use the following format instead:\n\n```yaml\ntests:\n - kube:\n get:\n type: pods\n labels:\n app: nginx\n```\n\nor using the `kube.get` shortcut, like so:\n\n```yaml\ntests:\n - kube.get:\n type: pods\n labels:\n app: nginx\n```\n\nThis was changed in order to better accomodate additional Kubernetes\nactions coming in future PRs, including `logs` and `exec` actions, as\nwell as to standardize the parsing of resource identifiers and label\nselectors.\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"rework label selector and resource identifier type"}},{"before":"044c1fe4849b7aa0f37df4c27d3c5acc5fc18485","after":"35a5d5855b198f8abce60bda2f7489cf64b40e63","ref":"refs/heads/main","pushedAt":"2023-08-08T15:52:23.000Z","pushType":"pr_merge","commitsCount":2,"pusher":{"login":"a-hilaly","name":"Amine","path":"/a-hilaly","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/10897901?s=80&v=4"},"commit":{"message":"Merge pull request #2 from gdt-dev/issue-8\n\nfix: call t.Error() from test spec","shortMessageHtmlLink":"Merge pull request #2 from gdt-dev/issue-8"}},{"before":null,"after":"cd15a65383605634d5feb02d01f4731960c76254","ref":"refs/heads/issue-8","pushedAt":"2023-08-08T15:16:59.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"fix: call t.Error() from test spec\n\nBrings in gdt@v1.1.1 and ensures that the test units/specs call\n`testing.T.Error()` instead of relying on the `Scenario.Run()` to do\nthat.\n\nAlso adds a custom YAML unmarshaler for the `Expect` struct and adds\nbetter parse-time errors for the `matches` field as requested in Issue\n8.\n\nAddresses Issue gdt-dev/gdt#8\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"fix: call t.Error() from test spec"}},{"before":"3f093419886ffc6492a57e6fabd8cd19d8f438ba","after":"044c1fe4849b7aa0f37df4c27d3c5acc5fc18485","ref":"refs/heads/main","pushedAt":"2023-07-30T15:40:10.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Fix stray backtick in README","shortMessageHtmlLink":"Fix stray backtick in README"}},{"before":"fa84f1c7f2cdcbe5bf461a3124d62d0822d458e2","after":"3f093419886ffc6492a57e6fabd8cd19d8f438ba","ref":"refs/heads/main","pushedAt":"2023-07-30T15:38:32.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Rename `kube.assert` to just `assert`\n\nBreaking API change. In order to align with the `exec` and `http`\nplugins, this brings the assertions contained in the `kube.assert` field\ninto a top-level Spec field called `assert`.\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"Rename kube.assert to just assert"}},{"before":"0a4b8f9f42ac7a798a4e79d12db455f4506549fe","after":"fa84f1c7f2cdcbe5bf461a3124d62d0822d458e2","ref":"refs/heads/main","pushedAt":"2023-07-26T00:48:37.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Remove extraneous build badge on README","shortMessageHtmlLink":"Remove extraneous build badge on README"}},{"before":"3faa2e84efccf27bfb0d0b732ca6d34f5ac20bc5","after":"0a4b8f9f42ac7a798a4e79d12db455f4506549fe","ref":"refs/heads/main","pushedAt":"2023-07-26T00:44:32.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"audit ports for KinD runner","shortMessageHtmlLink":"audit ports for KinD runner"}},{"before":"6870ee24e485e9fb0aa27ca8ab2db5ed6c9a6a05","after":"3faa2e84efccf27bfb0d0b732ca6d34f5ac20bc5","ref":"refs/heads/main","pushedAt":"2023-07-26T00:41:22.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"open ports up for Kind runner","shortMessageHtmlLink":"open ports up for Kind runner"}},{"before":"3770360a1ed7613f03b6a25e87c79c67c9d3e575","after":"6870ee24e485e9fb0aa27ca8ab2db5ed6c9a6a05","ref":"refs/heads/main","pushedAt":"2023-07-26T00:39:30.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Add docker.io to GH Action allowlist","shortMessageHtmlLink":"Add docker.io to GH Action allowlist"}},{"before":"6d4d01f2d390baa4ee4a1b409499eb44e733015c","after":"3770360a1ed7613f03b6a25e87c79c67c9d3e575","ref":"refs/heads/main","pushedAt":"2023-07-26T00:37:02.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Add storage.googleapis.com to GH Action allowlist","shortMessageHtmlLink":"Add storage.googleapis.com to GH Action allowlist"}},{"before":"7247fde61d8a05424d7ad408b1ff342d1a67409d","after":"6d4d01f2d390baa4ee4a1b409499eb44e733015c","ref":"refs/heads/main","pushedAt":"2023-07-26T00:35:44.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Add objects.githubusercontent.com to GH Action allowlist","shortMessageHtmlLink":"Add objects.githubusercontent.com to GH Action allowlist"}},{"before":"000067f012321a679d864888071fffecbee2adc0","after":"7247fde61d8a05424d7ad408b1ff342d1a67409d","ref":"refs/heads/main","pushedAt":"2023-07-26T00:32:55.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"fix YAML in GH Actions workflow","shortMessageHtmlLink":"fix YAML in GH Actions workflow"}},{"before":"fa44a01046b21e323acc2b8c029bfe389125c36f","after":"000067f012321a679d864888071fffecbee2adc0","ref":"refs/heads/main","pushedAt":"2023-07-26T00:32:00.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Re-enable KinD skip in gates","shortMessageHtmlLink":"Re-enable KinD skip in gates"}},{"before":"e8b078ae24aa0f6a156fd0569d054f2e996026be","after":"fa44a01046b21e323acc2b8c029bfe389125c36f","ref":"refs/heads/main","pushedAt":"2023-07-26T00:25:31.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"Add GH Actions","shortMessageHtmlLink":"Add GH Actions"}},{"before":null,"after":"e8b078ae24aa0f6a156fd0569d054f2e996026be","ref":"refs/heads/main","pushedAt":"2023-07-26T00:22:38.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"jaypipes","name":"Jay Pipes","path":"/jaypipes","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/265023?s=80&v=4"},"commit":{"message":"migrate jaypipes/gdt-kube to gdt-dev/kube\n\nPulls in the code from `github.com/jaypipes/gdt-kube` and adapts it to\nthe new API in `github.com/gdt-dev/gdt`.\n\nSigned-off-by: Jay Pipes ","shortMessageHtmlLink":"migrate jaypipes/gdt-kube to gdt-dev/kube"}}],"hasNextPage":false,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAEWcpkhAA","startCursor":null,"endCursor":null}},"title":"Activity · gdt-dev/kube"}