forked from barak/tarpoon

Add glide.yaml and vendor deps

Dalton Hubble 2016-12-03 22:43:32 -08:00
parent db918f12ad
commit 5b3d5e81bd
18880 changed files with 5166045 additions and 1 deletion
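
The commit pins its Go dependencies with Glide and checks the resulting vendor/ tree into the repository. As a minimal sketch of what such a manifest looks like (the actual glide.yaml added here is not reproduced in this diff view, and the import path and version below are illustrative assumptions), a Glide manifest vendoring k8s.io/kubernetes might read:

```yaml
# Sketch only: the package path and pinned version are assumptions,
# not the contents of the glide.yaml added by this commit.
package: github.com/example/tarpoon   # hypothetical import path
import:
  - package: k8s.io/kubernetes
    version: v1.4.6                   # assumed version, contemporary with this commit
```

Running `glide install` against such a manifest is what populates the vendor/ directory whose files make up the remainder of this diff.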

5
vendor/k8s.io/kubernetes/docs/OWNERS generated vendored Normal file
@@ -0,0 +1,5 @@
assignees:
- bgrant0607
- brendandburns
- smarterclayton
- thockin

32
vendor/k8s.io/kubernetes/docs/README.md generated vendored Normal file
@@ -0,0 +1,32 @@
# Kubernetes Documentation: releases.k8s.io/HEAD
* The [User's guide](user-guide/README.md) is for anyone who wants to run programs and
services on an existing Kubernetes cluster.
* The [Cluster Admin's guide](admin/README.md) is for anyone setting up
a Kubernetes cluster or administering it.
* The [Developer guide](devel/README.md) is for anyone wanting to write
programs that access the Kubernetes API, write plugins or extensions, or
modify the core code of Kubernetes.
* The [Kubectl Command Line Interface](user-guide/kubectl/kubectl.md) is a detailed reference on
the `kubectl` CLI.
* The [API object documentation](api-reference/README.md)
is a detailed description of all fields found in core API objects.
* An overview of the [Design of Kubernetes](design/)
* There are example files and walkthroughs in the [examples](../examples/)
folder.
* If something went wrong, see the [troubleshooting](http://kubernetes.io/docs/troubleshooting/) guide for how to debug.
You should also check the [known issues for the release](../CHANGELOG.md) you're using.
* To report a security issue, see [Reporting a Security Issue](reporting-security-issues.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/README.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/accessing-the-api.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/accessing-the-api/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/accessing-the-api.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/admission-controllers.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/admission-controllers/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/admission-controllers.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/authentication.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/authentication/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/authentication.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/authorization.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/authorization/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/authorization.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/cluster-components.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-components/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-components.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/cluster-large.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-large/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-large.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/cluster-management.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-management/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-management.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/cluster-troubleshooting.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/cluster-troubleshooting/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/cluster-troubleshooting.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/daemons.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/daemons/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/daemons.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/dns.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/dns/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/dns.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/etcd.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/etcd/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/etcd.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/federation-apiserver.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/federation-apiserver.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/federation-controller-manager.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/federation-controller-manager.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/garbage-collection.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/garbage-collection/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/garbage-collection.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/high-availability.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/high-availability/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/high-availability.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/introduction.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/introduction/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/introduction.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/kube-apiserver.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-apiserver.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/kube-controller-manager.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-controller-manager.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

8
vendor/k8s.io/kubernetes/docs/admin/kube-proxy.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-proxy.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/kube-scheduler.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kube-scheduler.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

8
vendor/k8s.io/kubernetes/docs/admin/kubelet.md generated vendored Normal file
@@ -0,0 +1,8 @@
This file is autogenerated, but we've stopped checking such files into the
repository to reduce the need for rebases. Please run hack/generate-docs.sh to
populate this file.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/kubelet.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/limitrange/README.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/limitrange/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/limitrange/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/master-node-communication.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/master-node-communication/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/master-node-communication.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/multi-cluster.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/multi-cluster/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/multi-cluster.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/namespaces.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/namespaces/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/namespaces.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/namespaces/README.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/namespaces/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/namespaces/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/network-plugins.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/network-plugins/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/network-plugins.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/networking.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/networking/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/networking.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/node.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/node/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/node.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/ovs-networking.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/ovs-networking/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/ovs-networking.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/resource-quota.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/resource-quota/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/resource-quota.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/resourcequota/README.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/resourcequota/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/resourcequota/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/salt.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/salt/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/salt.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

vendor/k8s.io/kubernetes/docs/admin/service-accounts-admin.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/service-accounts-admin/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/service-accounts-admin.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

6
vendor/k8s.io/kubernetes/docs/admin/static-pods.md generated vendored Normal file
@@ -0,0 +1,6 @@
This file has moved to: http://kubernetes.github.io/docs/admin/static-pods/
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/admin/static-pods.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

13
vendor/k8s.io/kubernetes/docs/api-reference/README.md generated vendored Normal file
@@ -0,0 +1,13 @@
# API Reference
Use the following reference docs to understand the Kubernetes REST API for various API group versions:
* v1: [operations](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/operations.html), [model definitions](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html)
* extensions/v1beta1: [operations](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/extensions/v1beta1/operations.html), [model definitions](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/extensions/v1beta1/definitions.html)
* batch/v1: [operations](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/batch/v1/operations.html), [model definitions](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/batch/v1/definitions.html)
* autoscaling/v1: [operations](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/autoscaling/v1/operations.html), [model definitions](https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/autoscaling/v1/definitions.html)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->
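
Each group/version listed above corresponds to the apiVersion value clients put in their object manifests, so the operations and definitions pages describe exactly what such objects may contain. For illustration only (this manifest is not part of the diff, and the name and image are hypothetical), a minimal object in the batch/v1 group would look like:

```yaml
# Illustrative batch/v1 object; the name and image are hypothetical.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
        - name: hello
          image: busybox
          command: ["echo", "hello"]
      restartPolicy: Never
```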

10 file diffs suppressed because they are too large.

@@ -0,0 +1,494 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="generator" content="Asciidoctor 0.1.4">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Top Level API Objects</title>
<style>
/* Asciidoctor default stylesheet | MIT License | http://asciidoctor.org */
article, aside, details, figcaption, figure, footer, header, hgroup, main, nav, section, summary { display: block; }
audio, canvas, video { display: inline-block; }
audio:not([controls]) { display: none; height: 0; }
[hidden] { display: none; }
html { background: #fff; color: #000; font-family: sans-serif; -ms-text-size-adjust: 100%; -webkit-text-size-adjust: 100%; }
body { margin: 0; }
a:focus { outline: thin dotted; }
a:active, a:hover { outline: 0; }
h1 { font-size: 2em; margin: 0.67em 0; }
abbr[title] { border-bottom: 1px dotted; }
b, strong { font-weight: bold; }
dfn { font-style: italic; }
hr { -moz-box-sizing: content-box; box-sizing: content-box; height: 0; }
mark { background: #ff0; color: #000; }
code, kbd, pre, samp { font-family: monospace, serif; font-size: 1em; }
pre { white-space: pre-wrap; }
q { quotes: "\201C" "\201D" "\2018" "\2019"; }
small { font-size: 80%; }
sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; }
sup { top: -0.5em; }
sub { bottom: -0.25em; }
img { border: 0; }
svg:not(:root) { overflow: hidden; }
figure { margin: 0; }
fieldset { border: 1px solid #c0c0c0; margin: 0 2px; padding: 0.35em 0.625em 0.75em; }
legend { border: 0; padding: 0; }
button, input, select, textarea { font-family: inherit; font-size: 100%; margin: 0; }
button, input { line-height: normal; }
button, select { text-transform: none; }
button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; cursor: pointer; }
button[disabled], html input[disabled] { cursor: default; }
input[type="checkbox"], input[type="radio"] { box-sizing: border-box; padding: 0; }
input[type="search"] { -webkit-appearance: textfield; -moz-box-sizing: content-box; -webkit-box-sizing: content-box; box-sizing: content-box; }
input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; }
button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; }
textarea { overflow: auto; vertical-align: top; }
table { border-collapse: collapse; border-spacing: 0; }
*, *:before, *:after { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; }
html, body { font-size: 100%; }
body { background: white; color: #222222; padding: 0; margin: 0; font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; font-weight: normal; font-style: normal; line-height: 1; position: relative; cursor: auto; }
a:hover { cursor: pointer; }
a:focus { outline: none; }
img, object, embed { max-width: 100%; height: auto; }
object, embed { height: 100%; }
img { -ms-interpolation-mode: bicubic; }
#map_canvas img, #map_canvas embed, #map_canvas object, .map_canvas img, .map_canvas embed, .map_canvas object { max-width: none !important; }
.left { float: left !important; }
.right { float: right !important; }
.text-left { text-align: left !important; }
.text-right { text-align: right !important; }
.text-center { text-align: center !important; }
.text-justify { text-align: justify !important; }
.hide { display: none; }
.antialiased, body { -webkit-font-smoothing: antialiased; }
img { display: inline-block; vertical-align: middle; }
textarea { height: auto; min-height: 50px; }
select { width: 100%; }
p.lead, .paragraph.lead > p, #preamble > .sectionbody > .paragraph:first-of-type p { font-size: 1.21875em; line-height: 1.6; }
.subheader, #content #toctitle, .admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .videoblock > .title, .listingblock > .title, .literalblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title, .tableblock > caption { line-height: 1.4; color: #7a2518; font-weight: 300; margin-top: 0.2em; margin-bottom: 0.5em; }
div, dl, dt, dd, ul, ol, li, h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6, pre, form, p, blockquote, th, td { margin: 0; padding: 0; direction: ltr; }
a { color: #005498; text-decoration: underline; line-height: inherit; }
a:hover, a:focus { color: #00467f; }
a img { border: none; }
p { font-family: inherit; font-weight: normal; font-size: 1em; line-height: 1.6; margin-bottom: 1.25em; text-rendering: optimizeLegibility; }
p aside { font-size: 0.875em; line-height: 1.35; font-style: italic; }
h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { font-family: Georgia, "URW Bookman L", Helvetica, Arial, sans-serif; font-weight: normal; font-style: normal; color: #ba3925; text-rendering: optimizeLegibility; margin-top: 1em; margin-bottom: 0.5em; line-height: 1.2125em; }
h1 small, h2 small, h3 small, #toctitle small, .sidebarblock > .content > .title small, h4 small, h5 small, h6 small { font-size: 60%; color: #e99b8f; line-height: 0; }
h1 { font-size: 2.125em; }
h2 { font-size: 1.6875em; }
h3, #toctitle, .sidebarblock > .content > .title { font-size: 1.375em; }
h4 { font-size: 1.125em; }
h5 { font-size: 1.125em; }
h6 { font-size: 1em; }
hr { border: solid #dddddd; border-width: 1px 0 0; clear: both; margin: 1.25em 0 1.1875em; height: 0; }
em, i { font-style: italic; line-height: inherit; }
strong, b { font-weight: bold; line-height: inherit; }
small { font-size: 60%; line-height: inherit; }
code { font-family: Consolas, "Liberation Mono", Courier, monospace; font-weight: normal; color: #6d180b; }
ul, ol, dl { font-size: 1em; line-height: 1.6; margin-bottom: 1.25em; list-style-position: outside; font-family: inherit; }
ul, ol { margin-left: 1.5em; }
ul li ul, ul li ol { margin-left: 1.25em; margin-bottom: 0; font-size: 1em; }
ul.square li ul, ul.circle li ul, ul.disc li ul { list-style: inherit; }
ul.square { list-style-type: square; }
ul.circle { list-style-type: circle; }
ul.disc { list-style-type: disc; }
ul.no-bullet { list-style: none; }
ol li ul, ol li ol { margin-left: 1.25em; margin-bottom: 0; }
dl dt { margin-bottom: 0.3125em; font-weight: bold; }
dl dd { margin-bottom: 1.25em; }
abbr, acronym { text-transform: uppercase; font-size: 90%; color: #222222; border-bottom: 1px dotted #dddddd; cursor: help; }
abbr { text-transform: none; }
blockquote { margin: 0 0 1.25em; padding: 0.5625em 1.25em 0 1.1875em; border-left: 1px solid #dddddd; }
blockquote cite { display: block; font-size: inherit; color: #555555; }
blockquote cite:before { content: "\2014 \0020"; }
blockquote cite a, blockquote cite a:visited { color: #555555; }
blockquote, blockquote p { line-height: 1.6; color: #6f6f6f; }
.vcard { display: inline-block; margin: 0 0 1.25em 0; border: 1px solid #dddddd; padding: 0.625em 0.75em; }
.vcard li { margin: 0; display: block; }
.vcard .fn { font-weight: bold; font-size: 0.9375em; }
.vevent .summary { font-weight: bold; }
.vevent abbr { cursor: auto; text-decoration: none; font-weight: bold; border: none; padding: 0 0.0625em; }
@media only screen and (min-width: 768px) { h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { line-height: 1.4; }
h1 { font-size: 2.75em; }
h2 { font-size: 2.3125em; }
h3, #toctitle, .sidebarblock > .content > .title { font-size: 1.6875em; }
h4 { font-size: 1.4375em; } }
.print-only { display: none !important; }
@media print { * { background: transparent !important; color: #000 !important; box-shadow: none !important; text-shadow: none !important; }
a, a:visited { text-decoration: underline; }
a[href]:after { content: " (" attr(href) ")"; }
abbr[title]:after { content: " (" attr(title) ")"; }
.ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { content: ""; }
pre, blockquote { border: 1px solid #999; page-break-inside: avoid; }
thead { display: table-header-group; }
tr, img { page-break-inside: avoid; }
img { max-width: 100% !important; }
@page { margin: 0.5cm; }
p, h2, h3, #toctitle, .sidebarblock > .content > .title { orphans: 3; widows: 3; }
h2, h3, #toctitle, .sidebarblock > .content > .title { page-break-after: avoid; }
.hide-on-print { display: none !important; }
.print-only { display: block !important; }
.hide-for-print { display: none !important; }
.show-for-print { display: inherit !important; } }
table { background: white; margin-bottom: 1.25em; border: solid 1px #dddddd; }
table thead, table tfoot { background: whitesmoke; font-weight: bold; }
table thead tr th, table thead tr td, table tfoot tr th, table tfoot tr td { padding: 0.5em 0.625em 0.625em; font-size: inherit; color: #222222; text-align: left; }
table tr th, table tr td { padding: 0.5625em 0.625em; font-size: inherit; color: #222222; }
table tr.even, table tr.alt, table tr:nth-of-type(even) { background: #f9f9f9; }
table thead tr th, table tfoot tr th, table tbody tr td, table tr td, table tfoot tr td { display: table-cell; line-height: 1.6; }
.clearfix:before, .clearfix:after, .float-group:before, .float-group:after { content: " "; display: table; }
.clearfix:after, .float-group:after { clear: both; }
*:not(pre) > code { font-size: 0.9375em; padding: 1px 3px 0; white-space: nowrap; background-color: #f2f2f2; border: 1px solid #cccccc; -webkit-border-radius: 4px; border-radius: 4px; text-shadow: none; }
pre, pre > code { line-height: 1.4; color: inherit; font-family: Consolas, "Liberation Mono", Courier, monospace; font-weight: normal; }
kbd.keyseq { color: #555555; }
kbd:not(.keyseq) { display: inline-block; color: #222222; font-size: 0.75em; line-height: 1.4; background-color: #F7F7F7; border: 1px solid #ccc; -webkit-border-radius: 3px; border-radius: 3px; -webkit-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; margin: -0.15em 0.15em 0 0.15em; padding: 0.2em 0.6em 0.2em 0.5em; vertical-align: middle; white-space: nowrap; }
kbd kbd:first-child { margin-left: 0; }
kbd kbd:last-child { margin-right: 0; }
.menuseq, .menu { color: #090909; }
p a > code:hover { color: #561309; }
#header, #content, #footnotes, #footer { width: 100%; margin-left: auto; margin-right: auto; margin-top: 0; margin-bottom: 0; max-width: 62.5em; *zoom: 1; position: relative; padding-left: 0.9375em; padding-right: 0.9375em; }
#header:before, #header:after, #content:before, #content:after, #footnotes:before, #footnotes:after, #footer:before, #footer:after { content: " "; display: table; }
#header:after, #content:after, #footnotes:after, #footer:after { clear: both; }
#header { margin-bottom: 2.5em; }
#header > h1 { color: black; font-weight: normal; border-bottom: 1px solid #dddddd; margin-bottom: -28px; padding-bottom: 32px; }
#header span { color: #6f6f6f; }
#header #revnumber { text-transform: capitalize; }
#header br { display: none; }
#header br + span { padding-left: 3px; }
#header br + span:before { content: "\2013 \0020"; }
#header br + span.author { padding-left: 0; }
#header br + span.author:before { content: ", "; }
#toc { border-bottom: 3px double #ebebeb; padding-bottom: 1.25em; }
#toc > ul { margin-left: 0.25em; }
#toc ul.sectlevel0 > li > a { font-style: italic; }
#toc ul.sectlevel0 ul.sectlevel1 { margin-left: 0; margin-top: 0.5em; margin-bottom: 0.5em; }
#toc ul { list-style-type: none; }
#toctitle { color: #7a2518; }
@media only screen and (min-width: 1280px) { body.toc2 { padding-left: 20em; }
#toc.toc2 { position: fixed; width: 20em; left: 0; top: 0; border-right: 1px solid #ebebeb; border-bottom: 0; z-index: 1000; padding: 1em; height: 100%; overflow: auto; }
#toc.toc2 #toctitle { margin-top: 0; }
#toc.toc2 > ul { font-size: .95em; }
#toc.toc2 ul ul { margin-left: 0; padding-left: 1.25em; }
#toc.toc2 ul.sectlevel0 ul.sectlevel1 { padding-left: 0; margin-top: 0.5em; margin-bottom: 0.5em; }
body.toc2.toc-right { padding-left: 0; padding-right: 20em; }
body.toc2.toc-right #toc.toc2 { border-right: 0; border-left: 1px solid #ebebeb; left: auto; right: 0; } }
#content #toc { border-style: solid; border-width: 1px; border-color: #d9d9d9; margin-bottom: 1.25em; padding: 1.25em; background: #f2f2f2; border-width: 0; -webkit-border-radius: 4px; border-radius: 4px; }
#content #toc > :first-child { margin-top: 0; }
#content #toc > :last-child { margin-bottom: 0; }
#content #toc a { text-decoration: none; }
#content #toctitle { font-weight: bold; font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; font-size: 1em; padding-left: 0.125em; }
#footer { max-width: 100%; background-color: #222222; padding: 1.25em; }
#footer-text { color: #dddddd; line-height: 1.44; }
.sect1 { padding-bottom: 1.25em; }
.sect1 + .sect1 { border-top: 3px double #ebebeb; }
#content h1 > a.anchor, h2 > a.anchor, h3 > a.anchor, #toctitle > a.anchor, .sidebarblock > .content > .title > a.anchor, h4 > a.anchor, h5 > a.anchor, h6 > a.anchor { position: absolute; width: 1em; margin-left: -1em; display: block; text-decoration: none; visibility: hidden; text-align: center; font-weight: normal; }
#content h1 > a.anchor:before, h2 > a.anchor:before, h3 > a.anchor:before, #toctitle > a.anchor:before, .sidebarblock > .content > .title > a.anchor:before, h4 > a.anchor:before, h5 > a.anchor:before, h6 > a.anchor:before { content: '\00A7'; font-size: .85em; vertical-align: text-top; display: block; margin-top: 0.05em; }
#content h1:hover > a.anchor, #content h1 > a.anchor:hover, h2:hover > a.anchor, h2 > a.anchor:hover, h3:hover > a.anchor, #toctitle:hover > a.anchor, .sidebarblock > .content > .title:hover > a.anchor, h3 > a.anchor:hover, #toctitle > a.anchor:hover, .sidebarblock > .content > .title > a.anchor:hover, h4:hover > a.anchor, h4 > a.anchor:hover, h5:hover > a.anchor, h5 > a.anchor:hover, h6:hover > a.anchor, h6 > a.anchor:hover { visibility: visible; }
#content h1 > a.link, h2 > a.link, h3 > a.link, #toctitle > a.link, .sidebarblock > .content > .title > a.link, h4 > a.link, h5 > a.link, h6 > a.link { color: #ba3925; text-decoration: none; }
#content h1 > a.link:hover, h2 > a.link:hover, h3 > a.link:hover, #toctitle > a.link:hover, .sidebarblock > .content > .title > a.link:hover, h4 > a.link:hover, h5 > a.link:hover, h6 > a.link:hover { color: #a53221; }
.imageblock, .literalblock, .listingblock, .verseblock, .videoblock { margin-bottom: 1.25em; }
.admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .videoblock > .title, .listingblock > .title, .literalblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title { text-align: left; font-weight: bold; }
.tableblock > caption { text-align: left; font-weight: bold; white-space: nowrap; overflow: visible; max-width: 0; }
table.tableblock #preamble > .sectionbody > .paragraph:first-of-type p { font-size: inherit; }
.admonitionblock > table { border: 0; background: none; width: 100%; }
.admonitionblock > table td.icon { text-align: center; width: 80px; }
.admonitionblock > table td.icon img { max-width: none; }
.admonitionblock > table td.icon .title { font-weight: bold; text-transform: uppercase; }
.admonitionblock > table td.content { padding-left: 1.125em; padding-right: 1.25em; border-left: 1px solid #dddddd; color: #6f6f6f; }
.admonitionblock > table td.content > :last-child > :last-child { margin-bottom: 0; }
.exampleblock > .content { border-style: solid; border-width: 1px; border-color: #e6e6e6; margin-bottom: 1.25em; padding: 1.25em; background: white; -webkit-border-radius: 4px; border-radius: 4px; }
.exampleblock > .content > :first-child { margin-top: 0; }
.exampleblock > .content > :last-child { margin-bottom: 0; }
.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6, .exampleblock > .content p { color: #333333; }
.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6 { line-height: 1; margin-bottom: 0.625em; }
.exampleblock > .content h1.subheader, .exampleblock > .content h2.subheader, .exampleblock > .content h3.subheader, .exampleblock > .content .subheader#toctitle, .sidebarblock.exampleblock > .content > .subheader.title, .exampleblock > .content h4.subheader, .exampleblock > .content h5.subheader, .exampleblock > .content h6.subheader { line-height: 1.4; }
.exampleblock.result > .content { -webkit-box-shadow: 0 1px 8px #d9d9d9; box-shadow: 0 1px 8px #d9d9d9; }
.sidebarblock { border-style: solid; border-width: 1px; border-color: #d9d9d9; margin-bottom: 1.25em; padding: 1.25em; background: #f2f2f2; -webkit-border-radius: 4px; border-radius: 4px; }
.sidebarblock > :first-child { margin-top: 0; }
.sidebarblock > :last-child { margin-bottom: 0; }
.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6, .sidebarblock p { color: #333333; }
.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6 { line-height: 1; margin-bottom: 0.625em; }
.sidebarblock h1.subheader, .sidebarblock h2.subheader, .sidebarblock h3.subheader, .sidebarblock .subheader#toctitle, .sidebarblock > .content > .subheader.title, .sidebarblock h4.subheader, .sidebarblock h5.subheader, .sidebarblock h6.subheader { line-height: 1.4; }
.sidebarblock > .content > .title { color: #7a2518; margin-top: 0; line-height: 1.6; }
.exampleblock > .content > :last-child > :last-child, .exampleblock > .content .olist > ol > li:last-child > :last-child, .exampleblock > .content .ulist > ul > li:last-child > :last-child, .exampleblock > .content .qlist > ol > li:last-child > :last-child, .sidebarblock > .content > :last-child > :last-child, .sidebarblock > .content .olist > ol > li:last-child > :last-child, .sidebarblock > .content .ulist > ul > li:last-child > :last-child, .sidebarblock > .content .qlist > ol > li:last-child > :last-child { margin-bottom: 0; }
.literalblock > .content pre, .listingblock > .content pre { background: none; border-width: 1px 0; border-style: dotted; border-color: #bfbfbf; -webkit-border-radius: 4px; border-radius: 4px; padding: 0.75em 0.75em 0.5em 0.75em; word-wrap: break-word; }
.literalblock > .content pre.nowrap, .listingblock > .content pre.nowrap { overflow-x: auto; white-space: pre; word-wrap: normal; }
.literalblock > .content pre > code, .listingblock > .content pre > code { display: block; }
@media only screen { .literalblock > .content pre, .listingblock > .content pre { font-size: 0.8em; } }
@media only screen and (min-width: 768px) { .literalblock > .content pre, .listingblock > .content pre { font-size: 0.9em; } }
@media only screen and (min-width: 1280px) { .literalblock > .content pre, .listingblock > .content pre { font-size: 1em; } }
.listingblock > .content { position: relative; }
.listingblock:hover code[class*=" language-"]:before { text-transform: uppercase; font-size: 0.9em; color: #999; position: absolute; top: 0.375em; right: 0.375em; }
.listingblock:hover code.asciidoc:before { content: "asciidoc"; }
.listingblock:hover code.clojure:before { content: "clojure"; }
.listingblock:hover code.css:before { content: "css"; }
.listingblock:hover code.groovy:before { content: "groovy"; }
.listingblock:hover code.html:before { content: "html"; }
.listingblock:hover code.java:before { content: "java"; }
.listingblock:hover code.javascript:before { content: "javascript"; }
.listingblock:hover code.python:before { content: "python"; }
.listingblock:hover code.ruby:before { content: "ruby"; }
.listingblock:hover code.scss:before { content: "scss"; }
.listingblock:hover code.xml:before { content: "xml"; }
.listingblock:hover code.yaml:before { content: "yaml"; }
.listingblock.terminal pre .command:before { content: attr(data-prompt); padding-right: 0.5em; color: #999; }
.listingblock.terminal pre .command:not([data-prompt]):before { content: '$'; }
table.pyhltable { border: 0; margin-bottom: 0; }
table.pyhltable td { vertical-align: top; padding-top: 0; padding-bottom: 0; }
table.pyhltable td.code { padding-left: .75em; padding-right: 0; }
.highlight.pygments .lineno, table.pyhltable td:not(.code) { color: #999; padding-left: 0; padding-right: .5em; border-right: 1px solid #dddddd; }
.highlight.pygments .lineno { display: inline-block; margin-right: .25em; }
table.pyhltable .linenodiv { background-color: transparent !important; padding-right: 0 !important; }
.quoteblock { margin: 0 0 1.25em; padding: 0.5625em 1.25em 0 1.1875em; border-left: 1px solid #dddddd; }
.quoteblock blockquote { margin: 0 0 1.25em 0; padding: 0 0 0.5625em 0; border: 0; }
.quoteblock blockquote > .paragraph:last-child p { margin-bottom: 0; }
.quoteblock .attribution { margin-top: -.25em; padding-bottom: 0.5625em; font-size: inherit; color: #555555; }
.quoteblock .attribution br { display: none; }
.quoteblock .attribution cite { display: block; margin-bottom: 0.625em; }
table thead th, table tfoot th { font-weight: bold; }
table.tableblock.grid-all { border-collapse: separate; border-spacing: 1px; -webkit-border-radius: 4px; border-radius: 4px; border-top: 1px solid #dddddd; border-bottom: 1px solid #dddddd; }
table.tableblock.frame-topbot, table.tableblock.frame-none { border-left: 0; border-right: 0; }
table.tableblock.frame-sides, table.tableblock.frame-none { border-top: 0; border-bottom: 0; }
table.tableblock td .paragraph:last-child p, table.tableblock td > p:last-child { margin-bottom: 0; }
th.tableblock.halign-left, td.tableblock.halign-left { text-align: left; }
th.tableblock.halign-right, td.tableblock.halign-right { text-align: right; }
th.tableblock.halign-center, td.tableblock.halign-center { text-align: center; }
th.tableblock.valign-top, td.tableblock.valign-top { vertical-align: top; }
th.tableblock.valign-bottom, td.tableblock.valign-bottom { vertical-align: bottom; }
th.tableblock.valign-middle, td.tableblock.valign-middle { vertical-align: middle; }
p.tableblock.header { color: #222222; font-weight: bold; }
td > div.verse { white-space: pre; }
ol { margin-left: 1.75em; }
ul li ol { margin-left: 1.5em; }
dl dd { margin-left: 1.125em; }
dl dd:last-child, dl dd:last-child > :last-child { margin-bottom: 0; }
ol > li p, ul > li p, ul dd, ol dd, .olist .olist, .ulist .ulist, .ulist .olist, .olist .ulist { margin-bottom: 0.625em; }
ul.unstyled, ol.unnumbered, ul.checklist, ul.none { list-style-type: none; }
ul.unstyled, ol.unnumbered, ul.checklist { margin-left: 0.625em; }
ul.checklist li > p:first-child > i[class^="icon-check"]:first-child, ul.checklist li > p:first-child > input[type="checkbox"]:first-child { margin-right: 0.25em; }
ul.checklist li > p:first-child > input[type="checkbox"]:first-child { position: relative; top: 1px; }
ul.inline { margin: 0 auto 0.625em auto; margin-left: -1.375em; margin-right: 0; padding: 0; list-style: none; overflow: hidden; }
ul.inline > li { list-style: none; float: left; margin-left: 1.375em; display: block; }
ul.inline > li > * { display: block; }
.unstyled dl dt { font-weight: normal; font-style: normal; }
ol.arabic { list-style-type: decimal; }
ol.decimal { list-style-type: decimal-leading-zero; }
ol.loweralpha { list-style-type: lower-alpha; }
ol.upperalpha { list-style-type: upper-alpha; }
ol.lowerroman { list-style-type: lower-roman; }
ol.upperroman { list-style-type: upper-roman; }
ol.lowergreek { list-style-type: lower-greek; }
.hdlist > table, .colist > table { border: 0; background: none; }
.hdlist > table > tbody > tr, .colist > table > tbody > tr { background: none; }
td.hdlist1 { padding-right: .8em; font-weight: bold; }
td.hdlist1, td.hdlist2 { vertical-align: top; }
.literalblock + .colist, .listingblock + .colist { margin-top: -0.5em; }
.colist > table tr > td:first-of-type { padding: 0 .8em; line-height: 1; }
.colist > table tr > td:last-of-type { padding: 0.25em 0; }
.qanda > ol > li > p > em:only-child { color: #00467f; }
.thumb, .th { line-height: 0; display: inline-block; border: solid 4px white; -webkit-box-shadow: 0 0 0 1px #dddddd; box-shadow: 0 0 0 1px #dddddd; }
.imageblock.left, .imageblock[style*="float: left"] { margin: 0.25em 0.625em 1.25em 0; }
.imageblock.right, .imageblock[style*="float: right"] { margin: 0.25em 0 1.25em 0.625em; }
.imageblock > .title { margin-bottom: 0; }
.imageblock.thumb, .imageblock.th { border-width: 6px; }
.imageblock.thumb > .title, .imageblock.th > .title { padding: 0 0.125em; }
.image.left, .image.right { margin-top: 0.25em; margin-bottom: 0.25em; display: inline-block; line-height: 0; }
.image.left { margin-right: 0.625em; }
.image.right { margin-left: 0.625em; }
a.image { text-decoration: none; }
span.footnote, span.footnoteref { vertical-align: super; font-size: 0.875em; }
span.footnote a, span.footnoteref a { text-decoration: none; }
#footnotes { padding-top: 0.75em; padding-bottom: 0.75em; margin-bottom: 0.625em; }
#footnotes hr { width: 20%; min-width: 6.25em; margin: -.25em 0 .75em 0; border-width: 1px 0 0 0; }
#footnotes .footnote { padding: 0 0.375em; line-height: 1.3; font-size: 0.875em; margin-left: 1.2em; text-indent: -1.2em; margin-bottom: .2em; }
#footnotes .footnote a:first-of-type { font-weight: bold; text-decoration: none; }
#footnotes .footnote:last-of-type { margin-bottom: 0; }
#content #footnotes { margin-top: -0.625em; margin-bottom: 0; padding: 0.75em 0; }
.gist .file-data > table { border: none; background: #fff; width: 100%; margin-bottom: 0; }
.gist .file-data > table td.line-data { width: 99%; }
div.unbreakable { page-break-inside: avoid; }
.big { font-size: larger; }
.small { font-size: smaller; }
.underline { text-decoration: underline; }
.overline { text-decoration: overline; }
.line-through { text-decoration: line-through; }
.aqua { color: #00bfbf; }
.aqua-background { background-color: #00fafa; }
.black { color: black; }
.black-background { background-color: black; }
.blue { color: #0000bf; }
.blue-background { background-color: #0000fa; }
.fuchsia { color: #bf00bf; }
.fuchsia-background { background-color: #fa00fa; }
.gray { color: #606060; }
.gray-background { background-color: #7d7d7d; }
.green { color: #006000; }
.green-background { background-color: #007d00; }
.lime { color: #00bf00; }
.lime-background { background-color: #00fa00; }
.maroon { color: #600000; }
.maroon-background { background-color: #7d0000; }
.navy { color: #000060; }
.navy-background { background-color: #00007d; }
.olive { color: #606000; }
.olive-background { background-color: #7d7d00; }
.purple { color: #600060; }
.purple-background { background-color: #7d007d; }
.red { color: #bf0000; }
.red-background { background-color: #fa0000; }
.silver { color: #909090; }
.silver-background { background-color: #bcbcbc; }
.teal { color: #006060; }
.teal-background { background-color: #007d7d; }
.white { color: #bfbfbf; }
.white-background { background-color: #fafafa; }
.yellow { color: #bfbf00; }
.yellow-background { background-color: #fafa00; }
span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { cursor: default; }
.admonitionblock td.icon [class^="icon-"]:before { font-size: 2.5em; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5); cursor: default; }
.admonitionblock td.icon .icon-note:before { content: "\f05a"; color: #005498; color: #003f72; }
.admonitionblock td.icon .icon-tip:before { content: "\f0eb"; text-shadow: 1px 1px 2px rgba(155, 155, 0, 0.8); color: #111; }
.admonitionblock td.icon .icon-warning:before { content: "\f071"; color: #bf6900; }
.admonitionblock td.icon .icon-caution:before { content: "\f06d"; color: #bf3400; }
.admonitionblock td.icon .icon-important:before { content: "\f06a"; color: #bf0000; }
.conum { display: inline-block; color: white !important; background-color: #222222; -webkit-border-radius: 100px; border-radius: 100px; text-align: center; width: 20px; height: 20px; font-size: 12px; font-weight: bold; line-height: 20px; font-family: Arial, sans-serif; font-style: normal; position: relative; top: -2px; letter-spacing: -1px; }
.conum * { color: white !important; }
.conum + b { display: none; }
.conum:after { content: attr(data-value); }
.conum:not([data-value]):empty { display: none; }
.literalblock > .content > pre, .listingblock > .content > pre { -webkit-border-radius: 0; border-radius: 0; }
</style>
</head>
<body class="article">
<div id="header">
</div>
<div id="content">
<div class="sect1">
<h2 id="_top_level_api_objects">Top Level API Objects</h2>
<div class="sectionbody">
</div>
</div>
<div class="sect1">
<h2 id="_definitions">Definitions</h2>
<div class="sectionbody">
<div class="sect2">
<h3 id="_v1_apiresourcelist">v1.APIResourceList</h3>
<div class="paragraph">
<p>APIResourceList is a list of APIResource; it is used to expose the name of the resources supported in a specific group and version, and whether each resource is namespaced.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">apiVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: <a href="http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources">http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#resources</a></p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">groupVersion</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">groupVersion is the group and version this APIResourceList is for.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">resources</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">resources contains the name of the resources and if they are namespaced.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="#_v1_apiresource">v1.APIResource</a> array</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_v1_apiresource">v1.APIResource</h3>
<div class="paragraph">
<p>APIResource specifies the name of a resource and whether it is namespaced.</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
<col style="width:20%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">Name</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Required</th>
<th class="tableblock halign-left valign-top">Schema</th>
<th class="tableblock halign-left valign-top">Default</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">name</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">name is the name of the resource.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">namespaced</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">namespaced indicates if a resource is namespaced or not.</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">boolean</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">false</p></td>
</tr>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">kind is the kind for the resource (e.g. <em>Foo</em> is the kind for a resource <em>foo</em>)</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">true</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">string</p></td>
<td class="tableblock halign-left valign-top"></td>
</tr>
</tbody>
</table>
</div>
<div class="sect2">
<h3 id="_any">any</h3>
<div class="paragraph">
<p>Represents an untyped JSON map - see the description of the field for more info about the structure of this object.</p>
</div>
</div>
</div>
</div>
</div>
<div id="footer">
<div id="footer-text">
Last updated 2016-12-03 22:07:16 UTC
</div>
</div>
</body>
</html>
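
The v1.APIResourceList and v1.APIResource definitions above describe the discovery document an API server returns when asked which resources a group/version serves. Rendered as YAML, and with hypothetical resource entries, such a response would look roughly like:

```yaml
# Sketch of an APIResourceList per the field tables above; the entries are illustrative.
kind: APIResourceList
apiVersion: v1
groupVersion: v1
resources:
  - name: pods
    namespaced: true
    kind: Pod
  - name: nodes
    namespaced: false
    kind: Node
```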

@@ -0,0 +1,454 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta name="generator" content="Asciidoctor 0.1.4">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Operations</title>
<style>
/* Asciidoctor default stylesheet | MIT License | http://asciidoctor.org */
article, aside, details, figcaption, figure, footer, header, hgroup, main, nav, section, summary { display: block; }
audio, canvas, video { display: inline-block; }
audio:not([controls]) { display: none; height: 0; }
[hidden] { display: none; }
html { background: #fff; color: #000; font-family: sans-serif; -ms-text-size-adjust: 100%; -webkit-text-size-adjust: 100%; }
body { margin: 0; }
a:focus { outline: thin dotted; }
a:active, a:hover { outline: 0; }
h1 { font-size: 2em; margin: 0.67em 0; }
abbr[title] { border-bottom: 1px dotted; }
b, strong { font-weight: bold; }
dfn { font-style: italic; }
hr { -moz-box-sizing: content-box; box-sizing: content-box; height: 0; }
mark { background: #ff0; color: #000; }
code, kbd, pre, samp { font-family: monospace, serif; font-size: 1em; }
pre { white-space: pre-wrap; }
q { quotes: "\201C" "\201D" "\2018" "\2019"; }
small { font-size: 80%; }
sub, sup { font-size: 75%; line-height: 0; position: relative; vertical-align: baseline; }
sup { top: -0.5em; }
sub { bottom: -0.25em; }
img { border: 0; }
svg:not(:root) { overflow: hidden; }
figure { margin: 0; }
fieldset { border: 1px solid #c0c0c0; margin: 0 2px; padding: 0.35em 0.625em 0.75em; }
legend { border: 0; padding: 0; }
button, input, select, textarea { font-family: inherit; font-size: 100%; margin: 0; }
button, input { line-height: normal; }
button, select { text-transform: none; }
button, html input[type="button"], input[type="reset"], input[type="submit"] { -webkit-appearance: button; cursor: pointer; }
button[disabled], html input[disabled] { cursor: default; }
input[type="checkbox"], input[type="radio"] { box-sizing: border-box; padding: 0; }
input[type="search"] { -webkit-appearance: textfield; -moz-box-sizing: content-box; -webkit-box-sizing: content-box; box-sizing: content-box; }
input[type="search"]::-webkit-search-cancel-button, input[type="search"]::-webkit-search-decoration { -webkit-appearance: none; }
button::-moz-focus-inner, input::-moz-focus-inner { border: 0; padding: 0; }
textarea { overflow: auto; vertical-align: top; }
table { border-collapse: collapse; border-spacing: 0; }
*, *:before, *:after { -moz-box-sizing: border-box; -webkit-box-sizing: border-box; box-sizing: border-box; }
html, body { font-size: 100%; }
body { background: white; color: #222222; padding: 0; margin: 0; font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; font-weight: normal; font-style: normal; line-height: 1; position: relative; cursor: auto; }
a:hover { cursor: pointer; }
a:focus { outline: none; }
img, object, embed { max-width: 100%; height: auto; }
object, embed { height: 100%; }
img { -ms-interpolation-mode: bicubic; }
#map_canvas img, #map_canvas embed, #map_canvas object, .map_canvas img, .map_canvas embed, .map_canvas object { max-width: none !important; }
.left { float: left !important; }
.right { float: right !important; }
.text-left { text-align: left !important; }
.text-right { text-align: right !important; }
.text-center { text-align: center !important; }
.text-justify { text-align: justify !important; }
.hide { display: none; }
.antialiased, body { -webkit-font-smoothing: antialiased; }
img { display: inline-block; vertical-align: middle; }
textarea { height: auto; min-height: 50px; }
select { width: 100%; }
p.lead, .paragraph.lead > p, #preamble > .sectionbody > .paragraph:first-of-type p { font-size: 1.21875em; line-height: 1.6; }
.subheader, #content #toctitle, .admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .videoblock > .title, .listingblock > .title, .literalblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title, .tableblock > caption { line-height: 1.4; color: #7a2518; font-weight: 300; margin-top: 0.2em; margin-bottom: 0.5em; }
div, dl, dt, dd, ul, ol, li, h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6, pre, form, p, blockquote, th, td { margin: 0; padding: 0; direction: ltr; }
a { color: #005498; text-decoration: underline; line-height: inherit; }
a:hover, a:focus { color: #00467f; }
a img { border: none; }
p { font-family: inherit; font-weight: normal; font-size: 1em; line-height: 1.6; margin-bottom: 1.25em; text-rendering: optimizeLegibility; }
p aside { font-size: 0.875em; line-height: 1.35; font-style: italic; }
h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { font-family: Georgia, "URW Bookman L", Helvetica, Arial, sans-serif; font-weight: normal; font-style: normal; color: #ba3925; text-rendering: optimizeLegibility; margin-top: 1em; margin-bottom: 0.5em; line-height: 1.2125em; }
h1 small, h2 small, h3 small, #toctitle small, .sidebarblock > .content > .title small, h4 small, h5 small, h6 small { font-size: 60%; color: #e99b8f; line-height: 0; }
h1 { font-size: 2.125em; }
h2 { font-size: 1.6875em; }
h3, #toctitle, .sidebarblock > .content > .title { font-size: 1.375em; }
h4 { font-size: 1.125em; }
h5 { font-size: 1.125em; }
h6 { font-size: 1em; }
hr { border: solid #dddddd; border-width: 1px 0 0; clear: both; margin: 1.25em 0 1.1875em; height: 0; }
em, i { font-style: italic; line-height: inherit; }
strong, b { font-weight: bold; line-height: inherit; }
small { font-size: 60%; line-height: inherit; }
code { font-family: Consolas, "Liberation Mono", Courier, monospace; font-weight: normal; color: #6d180b; }
ul, ol, dl { font-size: 1em; line-height: 1.6; margin-bottom: 1.25em; list-style-position: outside; font-family: inherit; }
ul, ol { margin-left: 1.5em; }
ul li ul, ul li ol { margin-left: 1.25em; margin-bottom: 0; font-size: 1em; }
ul.square li ul, ul.circle li ul, ul.disc li ul { list-style: inherit; }
ul.square { list-style-type: square; }
ul.circle { list-style-type: circle; }
ul.disc { list-style-type: disc; }
ul.no-bullet { list-style: none; }
ol li ul, ol li ol { margin-left: 1.25em; margin-bottom: 0; }
dl dt { margin-bottom: 0.3125em; font-weight: bold; }
dl dd { margin-bottom: 1.25em; }
abbr, acronym { text-transform: uppercase; font-size: 90%; color: #222222; border-bottom: 1px dotted #dddddd; cursor: help; }
abbr { text-transform: none; }
blockquote { margin: 0 0 1.25em; padding: 0.5625em 1.25em 0 1.1875em; border-left: 1px solid #dddddd; }
blockquote cite { display: block; font-size: inherit; color: #555555; }
blockquote cite:before { content: "\2014 \0020"; }
blockquote cite a, blockquote cite a:visited { color: #555555; }
blockquote, blockquote p { line-height: 1.6; color: #6f6f6f; }
.vcard { display: inline-block; margin: 0 0 1.25em 0; border: 1px solid #dddddd; padding: 0.625em 0.75em; }
.vcard li { margin: 0; display: block; }
.vcard .fn { font-weight: bold; font-size: 0.9375em; }
.vevent .summary { font-weight: bold; }
.vevent abbr { cursor: auto; text-decoration: none; font-weight: bold; border: none; padding: 0 0.0625em; }
@media only screen and (min-width: 768px) { h1, h2, h3, #toctitle, .sidebarblock > .content > .title, h4, h5, h6 { line-height: 1.4; }
h1 { font-size: 2.75em; }
h2 { font-size: 2.3125em; }
h3, #toctitle, .sidebarblock > .content > .title { font-size: 1.6875em; }
h4 { font-size: 1.4375em; } }
.print-only { display: none !important; }
@media print { * { background: transparent !important; color: #000 !important; box-shadow: none !important; text-shadow: none !important; }
a, a:visited { text-decoration: underline; }
a[href]:after { content: " (" attr(href) ")"; }
abbr[title]:after { content: " (" attr(title) ")"; }
.ir a:after, a[href^="javascript:"]:after, a[href^="#"]:after { content: ""; }
pre, blockquote { border: 1px solid #999; page-break-inside: avoid; }
thead { display: table-header-group; }
tr, img { page-break-inside: avoid; }
img { max-width: 100% !important; }
@page { margin: 0.5cm; }
p, h2, h3, #toctitle, .sidebarblock > .content > .title { orphans: 3; widows: 3; }
h2, h3, #toctitle, .sidebarblock > .content > .title { page-break-after: avoid; }
.hide-on-print { display: none !important; }
.print-only { display: block !important; }
.hide-for-print { display: none !important; }
.show-for-print { display: inherit !important; } }
table { background: white; margin-bottom: 1.25em; border: solid 1px #dddddd; }
table thead, table tfoot { background: whitesmoke; font-weight: bold; }
table thead tr th, table thead tr td, table tfoot tr th, table tfoot tr td { padding: 0.5em 0.625em 0.625em; font-size: inherit; color: #222222; text-align: left; }
table tr th, table tr td { padding: 0.5625em 0.625em; font-size: inherit; color: #222222; }
table tr.even, table tr.alt, table tr:nth-of-type(even) { background: #f9f9f9; }
table thead tr th, table tfoot tr th, table tbody tr td, table tr td, table tfoot tr td { display: table-cell; line-height: 1.6; }
.clearfix:before, .clearfix:after, .float-group:before, .float-group:after { content: " "; display: table; }
.clearfix:after, .float-group:after { clear: both; }
*:not(pre) > code { font-size: 0.9375em; padding: 1px 3px 0; white-space: nowrap; background-color: #f2f2f2; border: 1px solid #cccccc; -webkit-border-radius: 4px; border-radius: 4px; text-shadow: none; }
pre, pre > code { line-height: 1.4; color: inherit; font-family: Consolas, "Liberation Mono", Courier, monospace; font-weight: normal; }
kbd.keyseq { color: #555555; }
kbd:not(.keyseq) { display: inline-block; color: #222222; font-size: 0.75em; line-height: 1.4; background-color: #F7F7F7; border: 1px solid #ccc; -webkit-border-radius: 3px; border-radius: 3px; -webkit-box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; box-shadow: 0 1px 0 rgba(0, 0, 0, 0.2), 0 0 0 2px white inset; margin: -0.15em 0.15em 0 0.15em; padding: 0.2em 0.6em 0.2em 0.5em; vertical-align: middle; white-space: nowrap; }
kbd kbd:first-child { margin-left: 0; }
kbd kbd:last-child { margin-right: 0; }
.menuseq, .menu { color: #090909; }
p a > code:hover { color: #561309; }
#header, #content, #footnotes, #footer { width: 100%; margin-left: auto; margin-right: auto; margin-top: 0; margin-bottom: 0; max-width: 62.5em; *zoom: 1; position: relative; padding-left: 0.9375em; padding-right: 0.9375em; }
#header:before, #header:after, #content:before, #content:after, #footnotes:before, #footnotes:after, #footer:before, #footer:after { content: " "; display: table; }
#header:after, #content:after, #footnotes:after, #footer:after { clear: both; }
#header { margin-bottom: 2.5em; }
#header > h1 { color: black; font-weight: normal; border-bottom: 1px solid #dddddd; margin-bottom: -28px; padding-bottom: 32px; }
#header span { color: #6f6f6f; }
#header #revnumber { text-transform: capitalize; }
#header br { display: none; }
#header br + span { padding-left: 3px; }
#header br + span:before { content: "\2013 \0020"; }
#header br + span.author { padding-left: 0; }
#header br + span.author:before { content: ", "; }
#toc { border-bottom: 3px double #ebebeb; padding-bottom: 1.25em; }
#toc > ul { margin-left: 0.25em; }
#toc ul.sectlevel0 > li > a { font-style: italic; }
#toc ul.sectlevel0 ul.sectlevel1 { margin-left: 0; margin-top: 0.5em; margin-bottom: 0.5em; }
#toc ul { list-style-type: none; }
#toctitle { color: #7a2518; }
@media only screen and (min-width: 1280px) { body.toc2 { padding-left: 20em; }
#toc.toc2 { position: fixed; width: 20em; left: 0; top: 0; border-right: 1px solid #ebebeb; border-bottom: 0; z-index: 1000; padding: 1em; height: 100%; overflow: auto; }
#toc.toc2 #toctitle { margin-top: 0; }
#toc.toc2 > ul { font-size: .95em; }
#toc.toc2 ul ul { margin-left: 0; padding-left: 1.25em; }
#toc.toc2 ul.sectlevel0 ul.sectlevel1 { padding-left: 0; margin-top: 0.5em; margin-bottom: 0.5em; }
body.toc2.toc-right { padding-left: 0; padding-right: 20em; }
body.toc2.toc-right #toc.toc2 { border-right: 0; border-left: 1px solid #ebebeb; left: auto; right: 0; } }
#content #toc { border-style: solid; border-width: 1px; border-color: #d9d9d9; margin-bottom: 1.25em; padding: 1.25em; background: #f2f2f2; border-width: 0; -webkit-border-radius: 4px; border-radius: 4px; }
#content #toc > :first-child { margin-top: 0; }
#content #toc > :last-child { margin-bottom: 0; }
#content #toc a { text-decoration: none; }
#content #toctitle { font-weight: bold; font-family: "Helvetica Neue", "Helvetica", Helvetica, Arial, sans-serif; font-size: 1em; padding-left: 0.125em; }
#footer { max-width: 100%; background-color: #222222; padding: 1.25em; }
#footer-text { color: #dddddd; line-height: 1.44; }
.sect1 { padding-bottom: 1.25em; }
.sect1 + .sect1 { border-top: 3px double #ebebeb; }
#content h1 > a.anchor, h2 > a.anchor, h3 > a.anchor, #toctitle > a.anchor, .sidebarblock > .content > .title > a.anchor, h4 > a.anchor, h5 > a.anchor, h6 > a.anchor { position: absolute; width: 1em; margin-left: -1em; display: block; text-decoration: none; visibility: hidden; text-align: center; font-weight: normal; }
#content h1 > a.anchor:before, h2 > a.anchor:before, h3 > a.anchor:before, #toctitle > a.anchor:before, .sidebarblock > .content > .title > a.anchor:before, h4 > a.anchor:before, h5 > a.anchor:before, h6 > a.anchor:before { content: '\00A7'; font-size: .85em; vertical-align: text-top; display: block; margin-top: 0.05em; }
#content h1:hover > a.anchor, #content h1 > a.anchor:hover, h2:hover > a.anchor, h2 > a.anchor:hover, h3:hover > a.anchor, #toctitle:hover > a.anchor, .sidebarblock > .content > .title:hover > a.anchor, h3 > a.anchor:hover, #toctitle > a.anchor:hover, .sidebarblock > .content > .title > a.anchor:hover, h4:hover > a.anchor, h4 > a.anchor:hover, h5:hover > a.anchor, h5 > a.anchor:hover, h6:hover > a.anchor, h6 > a.anchor:hover { visibility: visible; }
#content h1 > a.link, h2 > a.link, h3 > a.link, #toctitle > a.link, .sidebarblock > .content > .title > a.link, h4 > a.link, h5 > a.link, h6 > a.link { color: #ba3925; text-decoration: none; }
#content h1 > a.link:hover, h2 > a.link:hover, h3 > a.link:hover, #toctitle > a.link:hover, .sidebarblock > .content > .title > a.link:hover, h4 > a.link:hover, h5 > a.link:hover, h6 > a.link:hover { color: #a53221; }
.imageblock, .literalblock, .listingblock, .verseblock, .videoblock { margin-bottom: 1.25em; }
.admonitionblock td.content > .title, .exampleblock > .title, .imageblock > .title, .videoblock > .title, .listingblock > .title, .literalblock > .title, .openblock > .title, .paragraph > .title, .quoteblock > .title, .sidebarblock > .title, .tableblock > .title, .verseblock > .title, .dlist > .title, .olist > .title, .ulist > .title, .qlist > .title, .hdlist > .title { text-align: left; font-weight: bold; }
.tableblock > caption { text-align: left; font-weight: bold; white-space: nowrap; overflow: visible; max-width: 0; }
table.tableblock #preamble > .sectionbody > .paragraph:first-of-type p { font-size: inherit; }
.admonitionblock > table { border: 0; background: none; width: 100%; }
.admonitionblock > table td.icon { text-align: center; width: 80px; }
.admonitionblock > table td.icon img { max-width: none; }
.admonitionblock > table td.icon .title { font-weight: bold; text-transform: uppercase; }
.admonitionblock > table td.content { padding-left: 1.125em; padding-right: 1.25em; border-left: 1px solid #dddddd; color: #6f6f6f; }
.admonitionblock > table td.content > :last-child > :last-child { margin-bottom: 0; }
.exampleblock > .content { border-style: solid; border-width: 1px; border-color: #e6e6e6; margin-bottom: 1.25em; padding: 1.25em; background: white; -webkit-border-radius: 4px; border-radius: 4px; }
.exampleblock > .content > :first-child { margin-top: 0; }
.exampleblock > .content > :last-child { margin-bottom: 0; }
.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6, .exampleblock > .content p { color: #333333; }
.exampleblock > .content h1, .exampleblock > .content h2, .exampleblock > .content h3, .exampleblock > .content #toctitle, .sidebarblock.exampleblock > .content > .title, .exampleblock > .content h4, .exampleblock > .content h5, .exampleblock > .content h6 { line-height: 1; margin-bottom: 0.625em; }
.exampleblock > .content h1.subheader, .exampleblock > .content h2.subheader, .exampleblock > .content h3.subheader, .exampleblock > .content .subheader#toctitle, .sidebarblock.exampleblock > .content > .subheader.title, .exampleblock > .content h4.subheader, .exampleblock > .content h5.subheader, .exampleblock > .content h6.subheader { line-height: 1.4; }
.exampleblock.result > .content { -webkit-box-shadow: 0 1px 8px #d9d9d9; box-shadow: 0 1px 8px #d9d9d9; }
.sidebarblock { border-style: solid; border-width: 1px; border-color: #d9d9d9; margin-bottom: 1.25em; padding: 1.25em; background: #f2f2f2; -webkit-border-radius: 4px; border-radius: 4px; }
.sidebarblock > :first-child { margin-top: 0; }
.sidebarblock > :last-child { margin-bottom: 0; }
.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6, .sidebarblock p { color: #333333; }
.sidebarblock h1, .sidebarblock h2, .sidebarblock h3, .sidebarblock #toctitle, .sidebarblock > .content > .title, .sidebarblock h4, .sidebarblock h5, .sidebarblock h6 { line-height: 1; margin-bottom: 0.625em; }
.sidebarblock h1.subheader, .sidebarblock h2.subheader, .sidebarblock h3.subheader, .sidebarblock .subheader#toctitle, .sidebarblock > .content > .subheader.title, .sidebarblock h4.subheader, .sidebarblock h5.subheader, .sidebarblock h6.subheader { line-height: 1.4; }
.sidebarblock > .content > .title { color: #7a2518; margin-top: 0; line-height: 1.6; }
.exampleblock > .content > :last-child > :last-child, .exampleblock > .content .olist > ol > li:last-child > :last-child, .exampleblock > .content .ulist > ul > li:last-child > :last-child, .exampleblock > .content .qlist > ol > li:last-child > :last-child, .sidebarblock > .content > :last-child > :last-child, .sidebarblock > .content .olist > ol > li:last-child > :last-child, .sidebarblock > .content .ulist > ul > li:last-child > :last-child, .sidebarblock > .content .qlist > ol > li:last-child > :last-child { margin-bottom: 0; }
.literalblock > .content pre, .listingblock > .content pre { background: none; border-width: 1px 0; border-style: dotted; border-color: #bfbfbf; -webkit-border-radius: 4px; border-radius: 4px; padding: 0.75em 0.75em 0.5em 0.75em; word-wrap: break-word; }
.literalblock > .content pre.nowrap, .listingblock > .content pre.nowrap { overflow-x: auto; white-space: pre; word-wrap: normal; }
.literalblock > .content pre > code, .listingblock > .content pre > code { display: block; }
@media only screen { .literalblock > .content pre, .listingblock > .content pre { font-size: 0.8em; } }
@media only screen and (min-width: 768px) { .literalblock > .content pre, .listingblock > .content pre { font-size: 0.9em; } }
@media only screen and (min-width: 1280px) { .literalblock > .content pre, .listingblock > .content pre { font-size: 1em; } }
.listingblock > .content { position: relative; }
.listingblock:hover code[class*=" language-"]:before { text-transform: uppercase; font-size: 0.9em; color: #999; position: absolute; top: 0.375em; right: 0.375em; }
.listingblock:hover code.asciidoc:before { content: "asciidoc"; }
.listingblock:hover code.clojure:before { content: "clojure"; }
.listingblock:hover code.css:before { content: "css"; }
.listingblock:hover code.groovy:before { content: "groovy"; }
.listingblock:hover code.html:before { content: "html"; }
.listingblock:hover code.java:before { content: "java"; }
.listingblock:hover code.javascript:before { content: "javascript"; }
.listingblock:hover code.python:before { content: "python"; }
.listingblock:hover code.ruby:before { content: "ruby"; }
.listingblock:hover code.scss:before { content: "scss"; }
.listingblock:hover code.xml:before { content: "xml"; }
.listingblock:hover code.yaml:before { content: "yaml"; }
.listingblock.terminal pre .command:before { content: attr(data-prompt); padding-right: 0.5em; color: #999; }
.listingblock.terminal pre .command:not([data-prompt]):before { content: '$'; }
table.pyhltable { border: 0; margin-bottom: 0; }
table.pyhltable td { vertical-align: top; padding-top: 0; padding-bottom: 0; }
table.pyhltable td.code { padding-left: .75em; padding-right: 0; }
.highlight.pygments .lineno, table.pyhltable td:not(.code) { color: #999; padding-left: 0; padding-right: .5em; border-right: 1px solid #dddddd; }
.highlight.pygments .lineno { display: inline-block; margin-right: .25em; }
table.pyhltable .linenodiv { background-color: transparent !important; padding-right: 0 !important; }
.quoteblock { margin: 0 0 1.25em; padding: 0.5625em 1.25em 0 1.1875em; border-left: 1px solid #dddddd; }
.quoteblock blockquote { margin: 0 0 1.25em 0; padding: 0 0 0.5625em 0; border: 0; }
.quoteblock blockquote > .paragraph:last-child p { margin-bottom: 0; }
.quoteblock .attribution { margin-top: -.25em; padding-bottom: 0.5625em; font-size: inherit; color: #555555; }
.quoteblock .attribution br { display: none; }
.quoteblock .attribution cite { display: block; margin-bottom: 0.625em; }
table thead th, table tfoot th { font-weight: bold; }
table.tableblock.grid-all { border-collapse: separate; border-spacing: 1px; -webkit-border-radius: 4px; border-radius: 4px; border-top: 1px solid #dddddd; border-bottom: 1px solid #dddddd; }
table.tableblock.frame-topbot, table.tableblock.frame-none { border-left: 0; border-right: 0; }
table.tableblock.frame-sides, table.tableblock.frame-none { border-top: 0; border-bottom: 0; }
table.tableblock td .paragraph:last-child p, table.tableblock td > p:last-child { margin-bottom: 0; }
th.tableblock.halign-left, td.tableblock.halign-left { text-align: left; }
th.tableblock.halign-right, td.tableblock.halign-right { text-align: right; }
th.tableblock.halign-center, td.tableblock.halign-center { text-align: center; }
th.tableblock.valign-top, td.tableblock.valign-top { vertical-align: top; }
th.tableblock.valign-bottom, td.tableblock.valign-bottom { vertical-align: bottom; }
th.tableblock.valign-middle, td.tableblock.valign-middle { vertical-align: middle; }
p.tableblock.header { color: #222222; font-weight: bold; }
td > div.verse { white-space: pre; }
ol { margin-left: 1.75em; }
ul li ol { margin-left: 1.5em; }
dl dd { margin-left: 1.125em; }
dl dd:last-child, dl dd:last-child > :last-child { margin-bottom: 0; }
ol > li p, ul > li p, ul dd, ol dd, .olist .olist, .ulist .ulist, .ulist .olist, .olist .ulist { margin-bottom: 0.625em; }
ul.unstyled, ol.unnumbered, ul.checklist, ul.none { list-style-type: none; }
ul.unstyled, ol.unnumbered, ul.checklist { margin-left: 0.625em; }
ul.checklist li > p:first-child > i[class^="icon-check"]:first-child, ul.checklist li > p:first-child > input[type="checkbox"]:first-child { margin-right: 0.25em; }
ul.checklist li > p:first-child > input[type="checkbox"]:first-child { position: relative; top: 1px; }
ul.inline { margin: 0 auto 0.625em auto; margin-left: -1.375em; margin-right: 0; padding: 0; list-style: none; overflow: hidden; }
ul.inline > li { list-style: none; float: left; margin-left: 1.375em; display: block; }
ul.inline > li > * { display: block; }
.unstyled dl dt { font-weight: normal; font-style: normal; }
ol.arabic { list-style-type: decimal; }
ol.decimal { list-style-type: decimal-leading-zero; }
ol.loweralpha { list-style-type: lower-alpha; }
ol.upperalpha { list-style-type: upper-alpha; }
ol.lowerroman { list-style-type: lower-roman; }
ol.upperroman { list-style-type: upper-roman; }
ol.lowergreek { list-style-type: lower-greek; }
.hdlist > table, .colist > table { border: 0; background: none; }
.hdlist > table > tbody > tr, .colist > table > tbody > tr { background: none; }
td.hdlist1 { padding-right: .8em; font-weight: bold; }
td.hdlist1, td.hdlist2 { vertical-align: top; }
.literalblock + .colist, .listingblock + .colist { margin-top: -0.5em; }
.colist > table tr > td:first-of-type { padding: 0 .8em; line-height: 1; }
.colist > table tr > td:last-of-type { padding: 0.25em 0; }
.qanda > ol > li > p > em:only-child { color: #00467f; }
.thumb, .th { line-height: 0; display: inline-block; border: solid 4px white; -webkit-box-shadow: 0 0 0 1px #dddddd; box-shadow: 0 0 0 1px #dddddd; }
.imageblock.left, .imageblock[style*="float: left"] { margin: 0.25em 0.625em 1.25em 0; }
.imageblock.right, .imageblock[style*="float: right"] { margin: 0.25em 0 1.25em 0.625em; }
.imageblock > .title { margin-bottom: 0; }
.imageblock.thumb, .imageblock.th { border-width: 6px; }
.imageblock.thumb > .title, .imageblock.th > .title { padding: 0 0.125em; }
.image.left, .image.right { margin-top: 0.25em; margin-bottom: 0.25em; display: inline-block; line-height: 0; }
.image.left { margin-right: 0.625em; }
.image.right { margin-left: 0.625em; }
a.image { text-decoration: none; }
span.footnote, span.footnoteref { vertical-align: super; font-size: 0.875em; }
span.footnote a, span.footnoteref a { text-decoration: none; }
#footnotes { padding-top: 0.75em; padding-bottom: 0.75em; margin-bottom: 0.625em; }
#footnotes hr { width: 20%; min-width: 6.25em; margin: -.25em 0 .75em 0; border-width: 1px 0 0 0; }
#footnotes .footnote { padding: 0 0.375em; line-height: 1.3; font-size: 0.875em; margin-left: 1.2em; text-indent: -1.2em; margin-bottom: .2em; }
#footnotes .footnote a:first-of-type { font-weight: bold; text-decoration: none; }
#footnotes .footnote:last-of-type { margin-bottom: 0; }
#content #footnotes { margin-top: -0.625em; margin-bottom: 0; padding: 0.75em 0; }
.gist .file-data > table { border: none; background: #fff; width: 100%; margin-bottom: 0; }
.gist .file-data > table td.line-data { width: 99%; }
div.unbreakable { page-break-inside: avoid; }
.big { font-size: larger; }
.small { font-size: smaller; }
.underline { text-decoration: underline; }
.overline { text-decoration: overline; }
.line-through { text-decoration: line-through; }
.aqua { color: #00bfbf; }
.aqua-background { background-color: #00fafa; }
.black { color: black; }
.black-background { background-color: black; }
.blue { color: #0000bf; }
.blue-background { background-color: #0000fa; }
.fuchsia { color: #bf00bf; }
.fuchsia-background { background-color: #fa00fa; }
.gray { color: #606060; }
.gray-background { background-color: #7d7d7d; }
.green { color: #006000; }
.green-background { background-color: #007d00; }
.lime { color: #00bf00; }
.lime-background { background-color: #00fa00; }
.maroon { color: #600000; }
.maroon-background { background-color: #7d0000; }
.navy { color: #000060; }
.navy-background { background-color: #00007d; }
.olive { color: #606000; }
.olive-background { background-color: #7d7d00; }
.purple { color: #600060; }
.purple-background { background-color: #7d007d; }
.red { color: #bf0000; }
.red-background { background-color: #fa0000; }
.silver { color: #909090; }
.silver-background { background-color: #bcbcbc; }
.teal { color: #006060; }
.teal-background { background-color: #007d7d; }
.white { color: #bfbfbf; }
.white-background { background-color: #fafafa; }
.yellow { color: #bfbf00; }
.yellow-background { background-color: #fafa00; }
span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { cursor: default; }
.admonitionblock td.icon [class^="icon-"]:before { font-size: 2.5em; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5); cursor: default; }
.admonitionblock td.icon .icon-note:before { content: "\f05a"; color: #005498; color: #003f72; }
.admonitionblock td.icon .icon-tip:before { content: "\f0eb"; text-shadow: 1px 1px 2px rgba(155, 155, 0, 0.8); color: #111; }
.admonitionblock td.icon .icon-warning:before { content: "\f071"; color: #bf6900; }
.admonitionblock td.icon .icon-caution:before { content: "\f06d"; color: #bf3400; }
.admonitionblock td.icon .icon-important:before { content: "\f06a"; color: #bf0000; }
.conum { display: inline-block; color: white !important; background-color: #222222; -webkit-border-radius: 100px; border-radius: 100px; text-align: center; width: 20px; height: 20px; font-size: 12px; font-weight: bold; line-height: 20px; font-family: Arial, sans-serif; font-style: normal; position: relative; top: -2px; letter-spacing: -1px; }
.conum * { color: white !important; }
.conum + b { display: none; }
.conum:after { content: attr(data-value); }
.conum:not([data-value]):empty { display: none; }
.literalblock > .content > pre, .listingblock > .content > pre { -webkit-border-radius: 0; border-radius: 0; }
</style>
</head>
<body class="article">
<div id="header">
</div>
<div id="content">
<div class="sect1">
<h2 id="_operations">Operations</h2>
<div class="sectionbody">
<div class="sect2">
<h3 id="_get_available_resources">get available resources</h3>
<div class="listingblock">
<div class="content">
<pre>GET /apis/batch/v2alpha1</pre>
</div>
</div>
<div class="sect3">
<h4 id="_responses">Responses</h4>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>
<col style="width:33%;">
<col style="width:33%;">
<col style="width:33%;">
</colgroup>
<thead>
<tr>
<th class="tableblock halign-left valign-top">HTTP Code</th>
<th class="tableblock halign-left valign-top">Description</th>
<th class="tableblock halign-left valign-top">Schema</th>
</tr>
</thead>
<tbody>
<tr>
<td class="tableblock halign-left valign-top"><p class="tableblock">default</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock">success</p></td>
<td class="tableblock halign-left valign-top"><p class="tableblock"><a href="../definitions#_v1_apiresourcelist">v1.APIResourceList</a></p></td>
</tr>
</tbody>
</table>
</div>
<div class="sect3">
<h4 id="_consumes">Consumes</h4>
<div class="ulist">
<ul>
<li>
<p>application/json</p>
</li>
<li>
<p>application/yaml</p>
</li>
<li>
<p>application/vnd.kubernetes.protobuf</p>
</li>
</ul>
</div>
</div>
<div class="sect3">
<h4 id="_produces">Produces</h4>
<div class="ulist">
<ul>
<li>
<p>application/json</p>
</li>
<li>
<p>application/yaml</p>
</li>
<li>
<p>application/vnd.kubernetes.protobuf</p>
</li>
</ul>
</div>
</div>
<div class="sect3">
<h4 id="_tags">Tags</h4>
<div class="ulist">
<ul>
<li>
<p>apisbatchv2alpha1</p>
</li>
</ul>
</div>
</div>
</div>
</div>
</div>
</div>
<div id="footer">
<div id="footer-text">
Last updated 2016-12-03 22:07:16 UTC
</div>
</div>
</body>
</html>

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,9 @@
<!-- needed for gh-pages to render html files when imported -->
{% include <REPLACE-WITH-RELEASE-VERSION>/extensions-v1beta1-definitions.html %}
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/extensions/v1beta1/definitions.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,9 @@
<!-- needed for gh-pages to render html files when imported -->
{% include <REPLACE-WITH-RELEASE-VERSION>/extensions-v1beta1-operations.html %}
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/extensions/v1beta1/operations.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,111 @@
# Well-Known Labels, Annotations and Taints
Kubernetes reserves all labels and annotations in the kubernetes.io namespace. This document describes
the well-known kubernetes.io labels and annotations.
This document serves both as a reference to the values, and as a coordination point for assigning values.
**Table of contents:**
<!-- BEGIN MUNGE: GENERATED_TOC -->
- [Well-Known Labels, Annotations and Taints](#well-known-labels-annotations-and-taints)
- [beta.kubernetes.io/arch](#betakubernetesioarch)
- [beta.kubernetes.io/os](#betakubernetesioos)
- [kubernetes.io/hostname](#kubernetesiohostname)
- [beta.kubernetes.io/instance-type](#betakubernetesioinstance-type)
- [failure-domain.beta.kubernetes.io/region](#failure-domainbetakubernetesioregion)
- [failure-domain.beta.kubernetes.io/zone](#failure-domainbetakubernetesiozone)
<!-- END MUNGE: GENERATED_TOC -->
## beta.kubernetes.io/arch
Example: `beta.kubernetes.io/arch=amd64`
Used on: Node
Kubelet populates this with `runtime.GOARCH` as defined by Go. This can be handy if you are mixing arm and x86 nodes,
for example.
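For example (a minimal sketch, assuming a working `kubectl` context against your cluster), you could inspect or filter nodes by this label:

```sh
# Show the arch label as a column for every node.
kubectl get nodes -L beta.kubernetes.io/arch

# List only the amd64 nodes.
kubectl get nodes -l beta.kubernetes.io/arch=amd64
```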
## beta.kubernetes.io/os
Example: `beta.kubernetes.io/os=linux`
Used on: Node
Kubelet populates this with `runtime.GOOS` as defined by Go. This can be handy if you are mixing operating systems
in your cluster (although currently Linux is the only OS supported by Kubernetes).
## kubernetes.io/hostname
Example: `kubernetes.io/hostname=ip-172-20-114-199.ec2.internal`
Used on: Node
Kubelet populates this with the hostname. Note that the hostname can be changed from the "actual" hostname
by passing the `--hostname-override` flag to kubelet.
## beta.kubernetes.io/instance-type
Example: `beta.kubernetes.io/instance-type=m3.medium`
Used on: Node
Kubelet populates this with the instance type as defined by the `cloudprovider`. It will not be set if you are
not using a cloudprovider. This can be handy if you want to target certain workloads to certain instance
types, but typically you want to rely on the Kubernetes scheduler to perform resource-based scheduling,
and you should aim to schedule based on properties rather than on instance types (e.g. require a GPU instead
of requiring a `g2.2xlarge`).
## failure-domain.beta.kubernetes.io/region
See [failure-domain.beta.kubernetes.io/zone](#failure-domainbetakubernetesiozone)
## failure-domain.beta.kubernetes.io/zone
Example:
`failure-domain.beta.kubernetes.io/region=us-east-1`
`failure-domain.beta.kubernetes.io/zone=us-east-1c`
Used on: Node, PersistentVolume
On the Node: Kubelet populates this with the zone information as defined by the `cloudprovider`. It will not be set if
not using a `cloudprovider`, but you should consider setting it on the nodes if it makes sense in your topology.
On the PersistentVolume: The `PersistentVolumeLabel` admission controller will automatically add zone labels to PersistentVolumes,
on GCE and AWS.
Kubernetes will automatically spread the pods in a replication controller or service across nodes in a single-zone
cluster (to reduce the impact of failures.) With multiple-zone clusters, this spreading behaviour is extended
across zones (to reduce the impact of zone failures.) This is achieved via SelectorSpreadPriority.
This is a best-effort placement, and so if the zones in your cluster are heterogeneous (e.g. different numbers of nodes,
different types of nodes, or different pod resource requirements), this might prevent equal spreading of
your pods across zones. If desired, you can use homogeneous zones (same number and types of nodes) to reduce
the probability of unequal spreading.
The scheduler (via the VolumeZonePredicate predicate) will also ensure that pods that claim a given volume
are only placed into the same zone as that volume, as volumes cannot be attached across zones.
The actual values of zone and region don't matter, nor is the meaning of the hierarchy rigidly defined. The expectation
is that failures of nodes in different zones should be uncorrelated unless the entire region has failed. For example,
zones should typically avoid sharing a single network switch. The exact mapping depends on your particular
infrastructure - a three-rack installation will choose a very different setup from a multi-datacenter configuration.
If `PersistentVolumeLabel` does not support automatic labeling of your PersistentVolumes, you should consider
adding the labels manually (or adding support to `PersistentVolumeLabel`), if you want the scheduler to prevent
pods from mounting volumes in a different zone. If your infrastructure doesn't have this constraint, you don't
need to add the zone labels to the volumes at all.
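As an illustration (a sketch only; it assumes `kubectl` access and a cloud provider that actually sets these labels), you can check which region and zone labels were populated:

```sh
# Show the region and zone labels on every node.
kubectl get nodes -L failure-domain.beta.kubernetes.io/region,failure-domain.beta.kubernetes.io/zone

# Show the labels on PersistentVolumes (added by the PersistentVolumeLabel admission controller on GCE/AWS).
kubectl get pv --show-labels
```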
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/labels-annotations-taints.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

8620
vendor/k8s.io/kubernetes/docs/api-reference/v1/definitions.html generated vendored Executable file

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,9 @@
<!-- needed for gh-pages to render html files when imported -->
{% include <REPLACE-WITH-RELEASE-VERSION>/v1-definitions.html %}
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/v1/definitions.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

33315
vendor/k8s.io/kubernetes/docs/api-reference/v1/operations.html generated vendored Executable file

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,9 @@
<!-- needed for gh-pages to render html files when imported -->
{% include <REPLACE-WITH-RELEASE-VERSION>/v1-operations.html %}
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api-reference/v1/operations.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

161
vendor/k8s.io/kubernetes/docs/api.md generated vendored Normal file
View file

@ -0,0 +1,161 @@
# The Kubernetes API
Primary system and API concepts are documented in the [User guide](user-guide/README.md).
Overall API conventions are described in the [API conventions doc](devel/api-conventions.md).
Complete API details are documented via [Swagger](http://swagger.io/). The Kubernetes apiserver (aka "master") exports an API that can be used to retrieve the [Swagger spec](https://github.com/swagger-api/swagger-spec/tree/master/schemas/v1.2) for the Kubernetes API, by default at `/swaggerapi`. It also exports a UI you can use to browse the API documentation at `/swagger-ui` if the apiserver is passed the `--enable-swagger-ui=true` flag. We also host generated [API reference docs](api-reference/README.md).
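For instance (a hedged sketch; it assumes the apiserver is reachable through `kubectl proxy` on a local port, and that the default Swagger paths are enabled in your configuration), the spec can be fetched directly:

```sh
# Proxy the apiserver to localhost, then fetch the Swagger resource listing
# and the declaration for the core v1 API.
kubectl proxy --port=8001 &
curl http://localhost:8001/swaggerapi/
curl http://localhost:8001/swaggerapi/api/v1
```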
Remote access to the API is discussed in the [access doc](admin/accessing-the-api.md).
The Kubernetes API also serves as the foundation for the declarative configuration schema for the system. The [Kubectl](user-guide/kubectl/kubectl.md) command-line tool can be used to create, update, delete, and get API objects.
Kubernetes also stores its serialized state (currently in [etcd](https://coreos.com/docs/distributed-configuration/getting-started-with-etcd/)) in terms of the API resources.
Kubernetes itself is decomposed into multiple components, which interact through its API.
## Adding APIs to Kubernetes
Every API that is added to Kubernetes carries with it increased cost and complexity for all parts of the Kubernetes ecosystem. New APIs imply new code to maintain,
new tests that may flake, new documentation that users are required to understand, increased cognitive load for kubectl users and many other incremental costs.
Of course, the addition of new APIs also enables new functionality that empowers users to simply do things that may have been previously complex, costly or both.
Given this balance between increasing the complexity of the project and reducing the complexity of user actions, we have established a set of criteria
to guide how we, as a development community, decide when an API should be added to the set of core Kubernetes APIs.
The criteria for inclusion are as follows:
* Within the Kubernetes ecosystem, there is a single well known definition of such an API. As an example, `cron` has a well understood and generally accepted
specification, whereas there are countless different systems for defining workflows of dependent actions (e.g. Celery et al.).
* The API object is expected to be generally useful to greater than 50% of the Kubernetes users. This is to ensure that we don't build up a collection of niche APIs
that users rarely need.
* There is general consensus in the Kubernetes community that the API object is in the "Kubernetes layer". See ["What is Kubernetes?"](http://kubernetes.io/docs/whatisk8s/) for a detailed
explanation of what we believe the "Kubernetes layer" to be.
Of course for every set of rules, we need to ensure that we are not hamstrung or limited by slavish devotion to those rules. Thus we also introduce two exceptions
for adding APIs in Kubernetes that violate these criteria.
These exceptions are:
* There is no other way to implement the functionality in Kubernetes. We are not sure there are any examples of this anymore, but we retain this exception just in case
we have overlooked something.
* Exceptional circumstances, as judged by the Kubernetes committers and discussed in community meeting prior to inclusion of the API. We hope (expect?) that this
exception will be used rarely if at all.
## API changes
In our experience, any system that is successful needs to grow and change as new use cases emerge or existing ones change. Therefore, we expect the Kubernetes API to continuously change and grow. However, we intend to not break compatibility with existing clients, for an extended period of time. In general, new API resources and new resource fields can be expected to be added frequently. Elimination of resources or fields will require following a deprecation process. The precise deprecation policy for eliminating features is TBD, but once we reach our 1.0 milestone, there will be a specific policy.
What constitutes a compatible change and how to change the API are detailed by the [API change document](devel/api_changes.md).
## API versioning
To make it easier to eliminate fields or restructure resource representations, Kubernetes supports
multiple API versions, each at a different API path, such as `/api/v1` or
`/apis/extensions/v1beta1`.
We chose to version at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-lifed and/or experimental APIs.
Note that API versioning and Software versioning are only indirectly related. The [API and release
versioning proposal](design/versioning.md) describes the relationship between API versioning and
software versioning.
Different API versions imply different levels of stability and support. The criteria for each level are described
in more detail in the [API Changes documentation](devel/api_changes.md#alpha-beta-and-stable-versions). They are summarized here:
- Alpha level:
- The version names contain `alpha` (e.g. `v1alpha1`).
- May be buggy. Enabling the feature may expose bugs. Disabled by default.
- Support for feature may be dropped at any time without notice.
- The API may change in incompatible ways in a later software release without notice.
- Recommended for use only in short-lived testing clusters, due to increased risk of bugs and lack of long-term support.
- Beta level:
- The version names contain `beta` (e.g. `v2beta3`).
- Code is well tested. Enabling the feature is considered safe. Enabled by default.
- Support for the overall feature will not be dropped, though details may change.
- The schema and/or semantics of objects may change in incompatible ways in a subsequent beta or stable release. When this happens,
we will provide instructions for migrating to the next version. This may require deleting, editing, and re-creating
API objects. The editing process may require some thought. This may require downtime for applications that rely on the feature.
- Recommended for only non-business-critical uses because of potential for incompatible changes in subsequent releases. If you have
multiple clusters which can be upgraded independently, you may be able to relax this restriction.
- **Please do try our beta features and give feedback on them! Once they exit beta, it may not be practical for us to make more changes.**
- Stable level:
- The version name is `vX` where `X` is an integer.
- Stable versions of features will appear in released software for many subsequent versions.
## API groups
To make it easier to extend the Kubernetes API, we are in the process of implementing [*API
groups*](proposals/api-group.md). These are simply different interfaces to read and/or modify the
same underlying resources. The API group is specified in a REST path and in the `apiVersion` field
of a serialized object.
Currently there are several API groups in use:
1. the "core" group, which is at REST path `/api/v1` and is not specified as part of the `apiVersion` field, e.g.
`apiVersion: v1`.
1. the "extensions" group, which is at REST path `/apis/extensions/$VERSION`, and which uses
`apiVersion: extensions/$VERSION` (e.g. currently `apiVersion: extensions/v1beta1`).
This holds types which will probably move to another API group eventually.
1. the "componentconfig" and "metrics" API groups.
In the future we expect that there will be more API groups, all at REST path `/apis/$API_GROUP` and using `apiVersion: $API_GROUP/$VERSION`.
We expect that there will be a way for [third parties to create their own API groups](design/extending-api.md).
To avoid naming collisions, third-party API groups must be a DNS name at least three segments long.
New Kubernetes API groups are suffixed with `.k8s.io` (e.g. `storage.k8s.io`, `rbac.authorization.k8s.io`).
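To see which groups and versions a particular cluster serves (a sketch, assuming `kubectl` access and, for the `curl` calls, a `kubectl proxy` running on localhost:8001):

```sh
# List all group/version pairs the apiserver supports.
kubectl api-versions

# The same information over REST: the core group lives under /api, named groups under /apis.
curl http://localhost:8001/api
curl http://localhost:8001/apis
```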
## Enabling resources in the extensions group
DaemonSets, Deployments, HorizontalPodAutoscalers, Ingress, Jobs and ReplicaSets are enabled by default.
Other extensions resources can be enabled by setting `--runtime-config` on the
apiserver. `--runtime-config` accepts comma-separated values. For example, to disable deployments and jobs, set
`--runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/jobs=false`
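A hypothetical apiserver invocation with that flag might look like the excerpt below (all other required flags omitted for brevity):

```sh
kube-apiserver \
  --runtime-config=extensions/v1beta1/deployments=false,extensions/v1beta1/jobs=false
```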
## v1beta1, v1beta2, and v1beta3 are deprecated; please move to v1 ASAP
As of June 4, 2015, the Kubernetes v1 API has been enabled by default. The v1beta1 and v1beta2 APIs were deleted on June 1, 2015. v1beta3 is planned to be deleted on July 6, 2015.
### v1 conversion tips (from v1beta3)
We're working to convert all documentation and examples to v1. Use `kubectl create --validate` to validate your JSON or YAML against our Swagger spec.
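For example (a sketch; `pod.yaml` is a placeholder for your own manifest):

```sh
# Ask the apiserver's Swagger schema to validate the manifest before creating it.
kubectl create --validate -f pod.yaml
```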
Changes to services are the most significant difference between v1beta3 and v1.
* The `service.spec.portalIP` property is renamed to `service.spec.clusterIP`.
* The `service.spec.createExternalLoadBalancer` property is removed. Specify `service.spec.type: "LoadBalancer"` to create an external load balancer instead.
* The `service.spec.publicIPs` property is deprecated and now called `service.spec.deprecatedPublicIPs`. This property will be removed entirely when v1beta3 is removed. The vast majority of users of this field were using it to expose services on ports on the node. Those users should specify `service.spec.type: "NodePort"` instead. Read [External Services](user-guide/services.md#external-services) for more info. If this is not sufficient for your use case, please file an issue or contact @thockin.
Some other differences between v1beta3 and v1:
* The `pod.spec.containers[*].privileged` and `pod.spec.containers[*].capabilities` properties are now nested under the `pod.spec.containers[*].securityContext` property. See [Security Contexts](user-guide/security-context.md).
* The `pod.spec.host` property is renamed to `pod.spec.nodeName`.
* The `endpoints.subsets[*].addresses.IP` property is renamed to `endpoints.subsets[*].addresses.ip`.
* The `pod.status.containerStatuses[*].state.termination` and `pod.status.containerStatuses[*].lastState.termination` properties are renamed to `pod.status.containerStatuses[*].state.terminated` and `pod.status.containerStatuses[*].lastState.terminated` respectively.
* The `pod.status.Condition` property is renamed to `pod.status.conditions`.
* The `status.details.id` property is renamed to `status.details.name`.
### v1beta3 conversion tips (from v1beta1/2)
Some important differences between v1beta1/2 and v1beta3:
* The resource `id` is now called `name`.
* `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata`
* `desiredState` is now called `spec`, and `currentState` is now called `status`
* `/minions` has been moved to `/nodes`, and the resource has kind `Node`
* The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`. If you were not using a namespace before, use `default` here.
* The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`.
* To watch for changes to a resource, open an HTTP or WebSocket connection to the collection query and provide the `?watch=true` query parameter along with the desired `resourceVersion` parameter to watch from (see the sketch after this list).
* The `labels` query parameter has been renamed to `labelSelector`.
* The `fields` query parameter has been renamed to `fieldSelector`.
* The container `entrypoint` has been renamed to `command`, and `command` has been renamed to `args`.
* Container, volume, and node resources are expressed as nested maps (e.g., `resources{cpu:1}`) rather than as individual fields, and resource values support [scaling suffixes](user-guide/compute-resources.md#specifying-resource-quantities) rather than fixed scales (e.g., milli-cores).
* Restart policy is represented simply as a string (e.g., `"Always"`) rather than as a nested map (`always{}`).
* Pull policies changed from `PullAlways`, `PullNever`, and `PullIfNotPresent` to `Always`, `Never`, and `IfNotPresent`.
* The volume `source` is inlined into `volume` rather than nested.
* Host volumes have been changed from `hostDir` to `hostPath` to better reflect that they can be files or directories.
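A sketch of such a watch, assuming `kubectl proxy` is serving the API on localhost:8001 and using the v1 paths for illustration (`12345` is a placeholder `resourceVersion`):

```sh
curl "http://localhost:8001/api/v1/namespaces/default/pods?watch=true&resourceVersion=12345"
```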
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/api.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

62
vendor/k8s.io/kubernetes/docs/design/README.md generated vendored Normal file
View file

@ -0,0 +1,62 @@
# Kubernetes Design Overview
Kubernetes is a system for managing containerized applications across multiple
hosts, providing basic mechanisms for deployment, maintenance, and scaling of
applications.
Kubernetes establishes robust declarative primitives for maintaining the desired
state requested by the user. We see these primitives as the main value added by
Kubernetes. Self-healing mechanisms, such as auto-restarting, re-scheduling, and
replicating containers require active controllers, not just imperative
orchestration.
Kubernetes is primarily targeted at applications composed of multiple
containers, such as elastic, distributed micro-services. It is also designed to
facilitate migration of non-containerized application stacks to Kubernetes. It
therefore includes abstractions for grouping containers in both loosely coupled
and tightly coupled formations, and provides ways for containers to find and
communicate with each other in relatively familiar ways.
Kubernetes enables users to ask a cluster to run a set of containers. The system
automatically chooses hosts to run those containers on. While Kubernetes's
scheduler is currently very simple, we expect it to grow in sophistication over
time. Scheduling is a policy-rich, topology-aware, workload-specific function
that significantly impacts availability, performance, and capacity. The
scheduler needs to take into account individual and collective resource
requirements, quality of service requirements, hardware/software/policy
constraints, affinity and anti-affinity specifications, data locality,
inter-workload interference, deadlines, and so on. Workload-specific
requirements will be exposed through the API as necessary.
Kubernetes is intended to run on a number of cloud providers, as well as on
physical hosts.
A single Kubernetes cluster is not intended to span multiple availability zones.
Instead, we recommend building a higher-level layer to replicate complete
deployments of highly available applications across multiple zones (see
[the multi-cluster doc](../admin/multi-cluster.md) and [cluster federation proposal](../proposals/federation.md)
for more details).
Finally, Kubernetes aspires to be an extensible, pluggable, building-block OSS
platform and toolkit. Therefore, architecturally, we want Kubernetes to be built
as a collection of pluggable components and layers, with the ability to use
alternative schedulers, controllers, storage systems, and distribution
mechanisms, and we're evolving its current code in that direction. Furthermore,
we want others to be able to extend Kubernetes functionality, such as with
higher-level PaaS functionality or multi-cluster layers, without modification of
core Kubernetes source. Therefore, its API isn't just (or even necessarily
mainly) targeted at end users, but at tool and extension developers. Its APIs
are intended to serve as the foundation for an open ecosystem of tools,
automation systems, and higher-level API layers. Consequently, there are no
"internal" inter-component APIs. All APIs are visible and available, including
the APIs used by the scheduler, the node controller, the replication-controller
manager, Kubelet's API, etc. There's no glass to break -- in order to handle
more complex use cases, one can just access the lower-level APIs in a fully
transparent, composable manner.
For more about the Kubernetes architecture, see [architecture](architecture.md).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

376
vendor/k8s.io/kubernetes/docs/design/access.md generated vendored Normal file
View file

@ -0,0 +1,376 @@
# K8s Identity and Access Management Sketch
This document suggests a direction for identity and access management in the
Kubernetes system.
## Background
High level goals are:
- Have a plan for how identity, authentication, and authorization will fit in
to the API.
- Have a plan for partitioning resources within a cluster between independent
organizational units.
- Ease integration with existing enterprise and hosted scenarios.
### Actors
Each of these can act as normal users or attackers.
- External Users: People who are accessing applications running on K8s (e.g.
a web site served by webserver running in a container on K8s), but who do not
have K8s API access.
- K8s Users: People who access the K8s API (e.g. create K8s API objects like
Pods)
- K8s Project Admins: People who manage access for some K8s Users
- K8s Cluster Admins: People who control the machines, networks, or binaries
that make up a K8s cluster.
- K8s Admin means K8s Cluster Admins and K8s Project Admins taken together.
### Threats
Both intentional attacks and accidental use of privilege are concerns.
For both cases it may be useful to think about these categories differently:
- Application Path - attack by sending network messages from the internet to
the IP/port of any application running on K8s. May exploit weakness in
application or misconfiguration of K8s.
- K8s API Path - attack by sending network messages to any K8s API endpoint.
- Insider Path - attack on K8s system components. Attacker may have
privileged access to networks, machines or K8s software and data. Software
errors in K8s system components and administrator error are some types of threat
in this category.
This document is primarily concerned with K8s API paths, and secondarily with
Insider paths. The Application path also needs to be secure, but is not the
focus of this document.
### Assets to protect
External User assets:
- Personal information like private messages, or images uploaded by External
Users.
- web server logs.
K8s User assets:
- External User assets of each K8s User.
- things private to the K8s app, like:
- credentials for accessing other services (docker private repos, storage
services, facebook, etc)
- SSL certificates for web servers
- proprietary data and code
K8s Cluster assets:
- Assets of each K8s User.
- Machine Certificates or secrets.
- The value of K8s cluster computing resources (cpu, memory, etc).
This document is primarily about protecting K8s User assets and K8s cluster
assets from other K8s Users and K8s Project and Cluster Admins.
### Usage environments
Cluster in Small organization:
- K8s Admins may be the same people as K8s Users.
- Few K8s Admins.
- Prefer ease of use to fine-grained access control/precise accounting, etc.
- Product requirement that it be easy for potential K8s Cluster Admin to try
out setting up a simple cluster.
Cluster in Large organization:
- K8s Admins typically distinct people from K8s Users. May need to divide
K8s Cluster Admin access by roles.
- K8s Users need to be protected from each other.
- Auditing of K8s User and K8s Admin actions important.
- Flexible accurate usage accounting and resource controls important.
- Lots of automated access to APIs.
- Need to integrate with existing enterprise directory, authentication,
accounting, auditing, and security policy infrastructure.
Org-run cluster:
- Organization that runs K8s master components is same as the org that runs
apps on K8s.
- Nodes may be on-premises VMs or physical machines; Cloud VMs; or a mix.
Hosted cluster:
- Offering the K8s API as a service, or offering a PaaS or SaaS built on K8s.
- May already offer web services, and need to integrate with existing customer
account concept, and existing authentication, accounting, auditing, and security
policy infrastructure.
- May want to leverage K8s User accounts and accounting to manage their User
accounts (not a priority to support this use case.)
- Precise and accurate accounting of resources needed. Resource controls
needed for hard limits (Users given limited slice of data) and soft limits
(Users can grow up to some limit and then be expanded).
K8s ecosystem services:
- There may be companies that want to offer their existing services (Build, CI,
A/B-test, release automation, etc) for use with K8s. There should be some story
for this case.
Pods configs should be largely portable between Org-run and hosted
configurations.
# Design
Related discussion:
- http://issue.k8s.io/442
- http://issue.k8s.io/443
This doc describes two security profiles:
- Simple profile: like single-user mode. Make it easy to evaluate K8s
without lots of configuring accounts and policies. Protects from unauthorized
users, but does not partition authorized users.
- Enterprise profile: Provide mechanisms needed for large numbers of users.
Defense in depth. Should integrate with existing enterprise security
infrastructure.
K8s distribution should include templates of config, and documentation, for
simple and enterprise profiles. System should be flexible enough for
knowledgeable users to create intermediate profiles, but K8s developers should
only reason about those two Profiles, not a matrix.
Features in this doc are divided into "Initial Feature", and "Improvements".
Initial features would be candidates for version 1.00.
## Identity
### userAccount
K8s will have a `userAccount` API object.
- `userAccount` has a UID which is immutable. This is used to associate users
with objects and to record actions in audit logs.
- `userAccount` has a name which is a string and human readable and unique among
userAccounts. It is used to refer to users in Policies, to ensure that the
Policies are human readable. It can be changed only when there are no Policy
objects or other objects which refer to that name. An email address is a
suggested format for this field.
- `userAccount` is not related to the unix username of processes in Pods created
by that userAccount.
- `userAccount` API objects can have labels.
The system may associate one or more Authentication Methods with a
`userAccount` (but they are not formally part of the userAccount object.)
In a simple deployment, the authentication method for a user might be an
authentication token which is verified by a K8s server. In a more complex
deployment, the authentication might be delegated to another system which is
trusted by the K8s API to authenticate users, but where the authentication
details are unknown to K8s.
Initial Features:
- There is no superuser `userAccount`
- `userAccount` objects are statically populated in the K8s API store by reading
a config file. Only a K8s Cluster Admin can do this.
- `userAccount` can have a default `namespace`. If API call does not specify a
`namespace`, the default `namespace` for that caller is assumed.
- `userAccount` is global. A single human with access to multiple namespaces is
recommended to only have one userAccount.
Improvements:
- Make `userAccount` part of a separate API group from core K8s objects like
`pod`. Facilitates plugging in alternate Access Management.
Simple Profile:
- Single `userAccount`, used by all K8s Users and Project Admins. One access
token shared by all.
Enterprise Profile:
- Every human user has own `userAccount`.
- `userAccount`s have labels that indicate both membership in groups, and
ability to act in certain roles.
- Each service using the API has own `userAccount` too. (e.g. `scheduler`,
`repcontroller`)
- Automated jobs to denormalize the LDAP group info into the local system
list of users in the K8s userAccount file.
### Unix accounts
A `userAccount` is not a Unix user account. The fact that a pod is started by a
`userAccount` does not mean that the processes in that pod's containers run as a
Unix user with a corresponding name or identity.
Initially:
- The unix accounts available in a container, and used by the processes running
in it, are those provided by the combination of the base operating system and
the Docker manifest.
- Kubernetes doesn't enforce any relation between `userAccount` and unix
accounts.
Improvements:
- Kubelet allocates disjoint blocks of root-namespace uids for each container.
This may provide some defense-in-depth against container escapes. (https://github.com/docker/docker/pull/4572)
- requires docker to integrate user namespace support, and deciding what
getpwnam() does for these uids.
- any features that help users avoid use of privileged containers
(http://issue.k8s.io/391)
### Namespaces
K8s will have a `namespace` API object. It is similar to a Google Compute
Engine `project`. It provides a namespace for objects created by a group of
people co-operating together, preventing name collisions with non-cooperating
groups. It also serves as a reference point for authorization policies.
Namespaces are described in [namespaces.md](namespaces.md).
In the Enterprise Profile:
- a `userAccount` may have permission to access several `namespace`s.
In the Simple Profile:
- There is a single `namespace` used by the single user.
Namespaces versus userAccount vs. Labels:
- `userAccount`s are intended for audit logging (both name and UID should be
logged), and to define who has access to `namespace`s.
- `labels` (see [docs/user-guide/labels.md](../../docs/user-guide/labels.md))
should be used to distinguish pods, users, and other objects that cooperate
towards a common goal but are different in some way, such as version, or
responsibilities.
- `namespace`s prevent name collisions between uncoordinated groups of people,
and provide a place to attach common policies for co-operating groups of people.
## Authentication
Goals for K8s authentication:
- Include a built-in authentication system that requires no configuration to use
in single-user mode, little configuration to add several user accounts, and no
HTTPS proxy.
- Allow for authentication to be handled by a system external to Kubernetes, to
allow integration with existing enterprise authentication systems. The
Kubernetes namespace itself should avoid taking contributions of multiple
authentication schemes. Instead, a trusted proxy in front of the apiserver can be
used to authenticate users.
  - For organizations whose security requirements only allow FIPS compliant
  implementations (e.g. apache) for authentication.
  - So the proxy can terminate SSL, and isolate the CA-signed certificate from
  the less trusted, higher-touch APIserver.
  - For organizations that already have existing SaaS web services (e.g.
  storage, VMs) and want a common authentication portal.
- Avoid mixing authentication and authorization, so that authorization policies
can be centrally managed, and so that changes in authentication methods do not
affect authorization code.
Initially:
- Tokens used to authenticate a user.
- Long lived tokens identify a particular `userAccount`.
- Administrator utility generates tokens at cluster setup.
- OAuth2.0 Bearer tokens protocol, http://tools.ietf.org/html/rfc6750
- No scopes for tokens. Authorization happens in the API server.
- Tokens dynamically generated by apiserver to identify pods which are making
API calls.
- Tokens checked in a module of the APIserver (a minimal sketch follows this
list).
- Authentication in apiserver can be disabled by flag, to allow testing without
authorization enabled, and to allow use of an authenticating proxy. In this
mode, a query parameter or header added by the proxy will identify the caller.
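For illustration only (not part of this design), checking a long-lived bearer
token in an apiserver module might look roughly like the sketch below;
`tokenToUser` and `authenticate` are hypothetical names, and a real
implementation would also need secure token storage and constant-time
comparison.

```go
package authn

import (
	"errors"
	"net/http"
	"strings"
)

// tokenToUser is a hypothetical token registry, e.g. loaded from the file
// written by the administrator utility at cluster setup.
var tokenToUser = map[string]string{
	"abc123": "alice@example.com",
}

// authenticate maps an OAuth2.0 Bearer token (RFC 6750) carried on the
// request to a userAccount name, or fails if no valid token is presented.
func authenticate(r *http.Request) (string, error) {
	auth := r.Header.Get("Authorization")
	if !strings.HasPrefix(auth, "Bearer ") {
		return "", errors.New("no bearer token presented")
	}
	user, ok := tokenToUser[strings.TrimPrefix(auth, "Bearer ")]
	if !ok {
		return "", errors.New("unrecognized token")
	}
	return user, nil
}
```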
Improvements:
- Refresh of tokens.
- SSH keys to access inside containers.
To be considered for subsequent versions:
- Fuller use of OAuth (http://tools.ietf.org/html/rfc6749)
- Scoped tokens.
- Tokens that are bound to the channel between the client and the api server
- http://www.ietf.org/proceedings/90/slides/slides-90-uta-0.pdf
- http://www.browserauth.net
## Authorization
K8s authorization should:
- Allow for a range of maturity levels, from single-user for those test driving
the system, to integration with existing to enterprise authorization systems.
- Allow for centralized management of users and policies. In some
organizations, this will mean that the definition of users and access policies
needs to reside on a system other than k8s and encompass other web services
(such as a storage service).
- Allow processes running in K8s Pods to take on identity, and to allow narrow
scoping of permissions for those identities in order to limit damage from
software faults.
- Have Authorization Policies exposed as API objects so that a single config
file can create or delete Pods, Replication Controllers, Services, and the
identities and policies for those Pods and Replication Controllers.
- Be separate as much as practical from Authentication, to allow Authentication
methods to change over time and space, without impacting Authorization policies.
K8s will implement a relatively simple
[Attribute-Based Access Control](http://en.wikipedia.org/wiki/Attribute_Based_Access_Control) model.
The model will be described in more detail in a forthcoming document. The model
will:
- Be less complex than XACML
- Be easily recognizable to those familiar with Amazon IAM Policies.
- Have a subset/aliases/defaults which allow it to be used in a way comfortable
to those users more familiar with Role-Based Access Control.
Authorization policy is set by creating a set of Policy objects.
The API Server will be the Enforcement Point for Policy. For each API call that
it receives, it will construct the Attributes needed to evaluate the policy
(what user is making the call, what resource they are accessing, what they are
trying to do to that resource, etc) and pass those attributes to a Decision Point.
The Decision Point code evaluates the Attributes against all the Policies and
allows or denies the API call. The system will be modular enough that the
Decision Point code can either be linked into the APIserver binary, or be
another service that the apiserver calls for each Decision (with appropriate
time-limited caching as needed for performance).
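As an illustration of the Enforcement Point / Decision Point split (the actual
model is deferred to the forthcoming document), the Decision Point might be
fronted by an interface along these lines; all names here are hypothetical.

```go
// Attributes are constructed by the API server (the Enforcement Point) for
// each incoming API call.
type Attributes struct {
	User      string // userAccount name, for policy matching
	UserUID   string // userAccount UID, for audit logging
	Namespace string
	Resource  string // e.g. "pods"
	Verb      string // e.g. "get", "create", "delete"
}

// Authorizer is the Decision Point. It may be linked into the apiserver
// binary or implemented by a remote service called per decision, with
// time-limited caching for performance.
type Authorizer interface {
	// Authorize returns nil if some Policy object allows the call.
	Authorize(a Attributes) error
}
```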
Policy objects may be applicable only to a single namespace; K8s Project
Admins would be able to create those as needed. Other Policy objects may be
applicable to all namespaces; a K8s Cluster Admin might create those in order
to authorize a new type of controller to be used by all namespaces, or to make
a K8s User into a K8s Project Admin.
## Accounting
The API should have a `quota` concept (see http://issue.k8s.io/442). A quota
object relates a namespace (and optionally a label selector) to a maximum
quantity of resources that may be used (see [resources design doc](resources.md)).
Initially:
- A `quota` object is immutable.
- For hosted K8s systems that do billing, the Project is the recommended level for
billing accounts.
- Every object that consumes resources should have a `namespace` so that
resource usage stats can be rolled up to the `namespace`.
- K8s Cluster Admin sets quota objects by writing a config file.
Improvements:
- Allow one namespace to charge the quota for one or more other namespaces. This
would be controlled by a policy which allows changing a `billing_namespace`
label on an object.
- Allow quota to be set by namespace owners for (namespace x label) combinations
(e.g. let the "webserver" namespace use 100 cores, but to prevent accidents, don't
allow the "webserver" namespace with "instance=test" to use more than 10 cores).
- Tools to help write consistent quota config files based on number of nodes,
historical namespace usages, QoS needs, etc.
- Way for K8s Cluster Admin to incrementally adjust Quota objects.
Simple profile:
- A single `namespace` with infinite resource limits.
Enterprise profile:
- Multiple namespaces each with their own limits.
Issues:
- Need for locking or "eventual consistency" when multiple apiserver goroutines
are accessing the object store and handling pod creations.
## Audit Logging
API actions can be logged.
Initial implementation:
- All API calls logged to nginx logs.
Improvements:
- API server does logging instead.
- Policies to drop logging for high-rate trusted API calls, or for users
performing audit or other sensitive functions.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/access.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,106 @@
# Kubernetes Proposal - Admission Control
**Related PR:**
| Topic | Link |
| ----- | ---- |
| Separate validation from RESTStorage | http://issue.k8s.io/2977 |
## Background
High level goals:
* Enable an easy-to-use mechanism to provide admission control to the cluster.
* Enable a provider to support multiple admission control strategies or author
their own.
* Ensure any rejected request can propagate errors back to the caller explaining
why the request failed.
Authorization via policy is focused on answering if a user is authorized to
perform an action.
Admission Control is focused on if the system will accept an authorized action.
Kubernetes may choose to dismiss an authorized action based on any number of
admission control strategies.
This proposal documents the basic design, and describes how any number of
admission control plug-ins could be injected.
Implementations of specific admission control strategies are handled in separate
documents.
## kube-apiserver
The kube-apiserver takes the following OPTIONAL arguments to enable admission
control:
| Option | Behavior |
| ------ | -------- |
| admission-control | Comma-delimited, ordered list of admission control choices to invoke prior to modifying or deleting an object. |
| admission-control-config-file | File with admission control configuration parameters to boot-strap plug-in. |
An **AdmissionControl** plug-in is an implementation of the following interface:
```go
package admission

// Attributes is an interface used by a plug-in to make an admission decision
// on an individual request.
type Attributes interface {
	GetNamespace() string
	GetKind() string
	GetOperation() string
	GetObject() runtime.Object
}

// Interface is an abstract, pluggable interface for Admission Control decisions.
type Interface interface {
	// Admit makes an admission decision based on the request attributes.
	// An error is returned if it denies the request.
	Admit(a Attributes) (err error)
}
```
A **plug-in** must be compiled into the binary, and is registered as an
available option by providing a name and an implementation of admission.Interface.
```go
func init() {
	admission.RegisterPlugin("AlwaysDeny", func(client client.Interface, config io.Reader) (admission.Interface, error) { return NewAlwaysDeny(), nil })
}
```
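For illustration, the body of such a plug-in might look like the sketch below;
the actual `AlwaysDeny` implementation lives under the plugin tree, and the
import path shown here is an assumption.

```go
package alwaysdeny

import (
	"fmt"

	"k8s.io/kubernetes/pkg/admission"
)

type alwaysDeny struct{}

// Admit rejects every request; the returned error is propagated back to the
// caller as the reason the request failed.
func (alwaysDeny) Admit(a admission.Attributes) error {
	return fmt.Errorf("admission denied: %s of %s in namespace %q is not allowed",
		a.GetOperation(), a.GetKind(), a.GetNamespace())
}

// NewAlwaysDeny returns the plug-in instance registered above.
func NewAlwaysDeny() admission.Interface {
	return alwaysDeny{}
}
```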
A **plug-in** must be added to the imports in [plugins.go](../../cmd/kube-apiserver/app/plugins.go)
```go
// Admission policies
_ "k8s.io/kubernetes/plugin/pkg/admission/admit"
_ "k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages"
_ "k8s.io/kubernetes/plugin/pkg/admission/antiaffinity"
...
_ "<YOUR NEW PLUGIN>"
```
Invocation of admission control is handled by the **APIServer** and not
individual **RESTStorage** implementations.
This design assumes that **Issue 2977** is adopted, and as a consequence, the
general framework of the APIServer request/response flow will ensure the
following:
1. Incoming request
2. Authenticate user
3. Authorize user
4. If operation=create|update|delete|connect, then admission.Admit(requestAttributes)
- invoke each admission.Interface object in sequence
5. Case on the operation:
- If operation=create|update, then validate(object) and persist
- If operation=delete, delete the object
- If operation=connect, exec
If at any step, there is an error, the request is canceled.
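A minimal sketch of step 4, invoking the ordered plug-in chain (the helper name
is hypothetical; the real wiring lives in the APIServer):

```go
// admitAll runs the plug-ins in the order given on the --admission-control
// flag; the first error rejects (and cancels) the request.
func admitAll(plugins []admission.Interface, attrs admission.Attributes) error {
	for _, p := range plugins {
		if err := p.Admit(attrs); err != nil {
			return err
		}
	}
	return nil
}
```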
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,233 @@
# Admission control plugin: LimitRanger
## Background
This document proposes a system for enforcing resource requirements constraints
as part of admission control.
## Use cases
1. Ability to enumerate resource requirement constraints per namespace
2. Ability to enumerate min/max resource constraints for a pod
3. Ability to enumerate min/max resource constraints for a container
4. Ability to specify default resource limits for a container
5. Ability to specify default resource requests for a container
6. Ability to enforce a ratio between request and limit for a resource.
7. Ability to enforce min/max storage requests for persistent volume claims
## Data Model
The **LimitRange** resource is scoped to a **Namespace**.
### Type
```go
// LimitType is a type of object that is limited
type LimitType string

const (
	// Limit that applies to all pods in a namespace
	LimitTypePod LimitType = "Pod"
	// Limit that applies to all containers in a namespace
	LimitTypeContainer LimitType = "Container"
)

// LimitRangeItem defines a min/max usage limit for any resource that matches
// on kind.
type LimitRangeItem struct {
	// Type of resource that this limit applies to.
	Type LimitType `json:"type,omitempty"`
	// Max usage constraints on this kind by resource name.
	Max ResourceList `json:"max,omitempty"`
	// Min usage constraints on this kind by resource name.
	Min ResourceList `json:"min,omitempty"`
	// Default resource requirement limit value by resource name if resource limit
	// is omitted.
	Default ResourceList `json:"default,omitempty"`
	// DefaultRequest is the default resource requirement request value by
	// resource name if resource request is omitted.
	DefaultRequest ResourceList `json:"defaultRequest,omitempty"`
	// MaxLimitRequestRatio if specified, the named resource must have a request
	// and limit that are both non-zero where limit divided by request is less
	// than or equal to the enumerated value; this represents the max burst for
	// the named resource.
	MaxLimitRequestRatio ResourceList `json:"maxLimitRequestRatio,omitempty"`
}

// LimitRangeSpec defines a min/max usage limit for resources that match
// on kind.
type LimitRangeSpec struct {
	// Limits is the list of LimitRangeItem objects that are enforced.
	Limits []LimitRangeItem `json:"limits"`
}

// LimitRange sets resource usage limits for each kind of resource in a
// Namespace.
type LimitRange struct {
	TypeMeta `json:",inline"`
	// Standard object's metadata.
	// More info:
	// http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata
	ObjectMeta `json:"metadata,omitempty"`
	// Spec defines the limits enforced.
	// More info:
	// http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status
	Spec LimitRangeSpec `json:"spec,omitempty"`
}

// LimitRangeList is a list of LimitRange items.
type LimitRangeList struct {
	TypeMeta `json:",inline"`
	// Standard list metadata.
	// More info:
	// http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#types-kinds
	ListMeta `json:"metadata,omitempty"`
	// Items is a list of LimitRange objects.
	// More info:
	// http://releases.k8s.io/HEAD/docs/design/admission_control_limit_range.md
	Items []LimitRange `json:"items"`
}
```
### Validation
Validation of a **LimitRange** enforces that for a given named resource the
following rules apply:
Min (if specified) <= DefaultRequest (if specified) <= Default (if specified)
<= Max (if specified)
### Default Value Behavior
The following default value behaviors are applied to a LimitRange for a given
named resource.
```
if LimitRangeItem.Default[resourceName] is undefined
	if LimitRangeItem.Max[resourceName] is defined
		LimitRangeItem.Default[resourceName] = LimitRangeItem.Max[resourceName]
```
```
if LimitRangeItem.DefaultRequest[resourceName] is undefined
	if LimitRangeItem.Default[resourceName] is defined
		LimitRangeItem.DefaultRequest[resourceName] = LimitRangeItem.Default[resourceName]
	else if LimitRangeItem.Min[resourceName] is defined
		LimitRangeItem.DefaultRequest[resourceName] = LimitRangeItem.Min[resourceName]
```
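Assuming `ResourceList` is a map from resource name to quantity (as in the
types above), the defaulting rules might be applied per named resource roughly
as follows; `applyDefaults` is a hypothetical helper, not the actual plug-in
code.

```go
// applyDefaults fills in Default and DefaultRequest for one named resource,
// following the rules above.
func applyDefaults(item *LimitRangeItem, name ResourceName) {
	if item.Default == nil {
		item.Default = ResourceList{}
	}
	if item.DefaultRequest == nil {
		item.DefaultRequest = ResourceList{}
	}
	if _, ok := item.Default[name]; !ok {
		if max, ok := item.Max[name]; ok {
			item.Default[name] = max
		}
	}
	if _, ok := item.DefaultRequest[name]; !ok {
		if def, ok := item.Default[name]; ok {
			item.DefaultRequest[name] = def
		} else if min, ok := item.Min[name]; ok {
			item.DefaultRequest[name] = min
		}
	}
}
```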
## AdmissionControl plugin: LimitRanger
The **LimitRanger** plug-in introspects all incoming pod requests and evaluates
the constraints defined on a LimitRange.
If a constraint is not specified for an enumerated resource, it is not enforced
or tracked.
To enable the plug-in and support for LimitRange, the kube-apiserver must be
configured as follows:
```console
$ kube-apiserver --admission-control=LimitRanger
```
### Enforcement of constraints
**Type: Container**
Supported Resources:
1. memory
2. cpu
Supported Constraints:
Per container, the following must hold true:
| Constraint | Behavior |
| ---------- | -------- |
| Min | Min <= Request (required) <= Limit (optional) |
| Max | Limit (required) <= Max |
| LimitRequestRatio | LimitRequestRatio <= ( Limit (required, non-zero) / Request (required, non-zero)) |
Supported Defaults:
1. Default - if the named resource has no enumerated value, the Limit is equal
to the Default
2. DefaultRequest - if the named resource has no enumerated value, the Request
is equal to the DefaultRequest
**Type: Pod**
Supported Resources:
1. memory
2. cpu
Supported Constraints:
Across all containers in a pod, the following must hold true:
| Constraint | Behavior |
| ---------- | -------- |
| Min | Min <= Request (required) <= Limit (optional) |
| Max | Limit (required) <= Max |
| LimitRequestRatio | LimitRequestRatio <= ( Limit (required, non-zero) / Request (non-zero) ) |
**Type: PersistentVolumeClaim**
Supported Resources:
1. storage
Supported Constraints:
Across all claims in a namespace, the following must hold true:
| Constraint | Behavior |
| ---------- | -------- |
| Min | Min <= Request (required) |
| Max | Request (required) <= Max |
Supported Defaults: None. Storage is a required field in `PersistentVolumeClaim`, so defaults are not applied at this time.
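A sketch of the per-container checks from the Container table above, using
plain milli-unit integers for readability (the real plug-in compares
`resource.Quantity` values); all names and the zero-means-unset convention are
assumptions of this sketch.

```go
// checkContainerResource validates one named resource of one container
// against a Container-type LimitRangeItem. A value of 0 means "not set".
func checkContainerResource(min, max, maxRatio, request, limit int64) error {
	if min > 0 && request < min {
		return fmt.Errorf("request %d is less than the minimum %d", request, min)
	}
	if limit > 0 && request > limit {
		return fmt.Errorf("request %d exceeds the limit %d", request, limit)
	}
	if max > 0 && (limit == 0 || limit > max) {
		return fmt.Errorf("a limit is required and must not exceed the maximum %d", max)
	}
	if maxRatio > 0 {
		if request == 0 || limit == 0 {
			return fmt.Errorf("request and limit must both be non-zero to enforce a ratio")
		}
		// Integer division for brevity; the real check uses exact quantity arithmetic.
		if limit/request > maxRatio {
			return fmt.Errorf("limit/request ratio exceeds the maximum %d", maxRatio)
		}
	}
	return nil
}
```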
## Run-time configuration
The default ```LimitRange``` that is applied via Salt configuration will be
updated as follows:
```
apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "limits"
  namespace: default
spec:
  limits:
    - type: "Container"
      defaultRequest:
        cpu: "100m"
```
## Example
An example LimitRange configuration:
| Type | Resource | Min | Max | Default | DefaultRequest | LimitRequestRatio |
| ---- | -------- | --- | --- | ------- | -------------- | ----------------- |
| Container | cpu | .1 | 1 | 500m | 250m | 4 |
| Container | memory | 250Mi | 1Gi | 500Mi | 250Mi | |
Assuming an incoming container that specified no resource requirements, the
following would happen.
1. The incoming container cpu would request 250m with a limit of 500m.
2. The incoming container memory would request 250Mi with a limit of 500Mi.
3. If the container is later resized, its cpu would be constrained to between
.1 and 1 and the ratio of limit to request could not exceed 4.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_limit_range.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,215 @@
# Admission control plugin: ResourceQuota
## Background
This document describes a system for enforcing hard resource usage limits per
namespace as part of admission control.
## Use cases
1. Ability to enumerate resource usage limits per namespace.
2. Ability to monitor resource usage for tracked resources.
3. Ability to reject resource usage exceeding hard quotas.
## Data Model
The **ResourceQuota** object is scoped to a **Namespace**.
```go
// The following identify resource constants for Kubernetes object types
const (
	// Pods, number
	ResourcePods ResourceName = "pods"
	// Services, number
	ResourceServices ResourceName = "services"
	// ReplicationControllers, number
	ResourceReplicationControllers ResourceName = "replicationcontrollers"
	// ResourceQuotas, number
	ResourceQuotas ResourceName = "resourcequotas"
	// ResourceSecrets, number
	ResourceSecrets ResourceName = "secrets"
	// ResourcePersistentVolumeClaims, number
	ResourcePersistentVolumeClaims ResourceName = "persistentvolumeclaims"
)

// ResourceQuotaSpec defines the desired hard limits to enforce for Quota
type ResourceQuotaSpec struct {
	// Hard is the set of desired hard limits for each named resource
	Hard ResourceList `json:"hard,omitempty" description:"hard is the set of desired hard limits for each named resource; see http://releases.k8s.io/HEAD/docs/design/admission_control_resource_quota.md#admissioncontrol-plugin-resourcequota"`
}

// ResourceQuotaStatus defines the enforced hard limits and observed use
type ResourceQuotaStatus struct {
	// Hard is the set of enforced hard limits for each named resource
	Hard ResourceList `json:"hard,omitempty" description:"hard is the set of enforced hard limits for each named resource; see http://releases.k8s.io/HEAD/docs/design/admission_control_resource_quota.md#admissioncontrol-plugin-resourcequota"`
	// Used is the current observed total usage of the resource in the namespace
	Used ResourceList `json:"used,omitempty" description:"used is the current observed total usage of the resource in the namespace"`
}

// ResourceQuota sets aggregate quota restrictions enforced per namespace
type ResourceQuota struct {
	TypeMeta `json:",inline"`
	ObjectMeta `json:"metadata,omitempty" description:"standard object metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"`
	// Spec defines the desired quota
	Spec ResourceQuotaSpec `json:"spec,omitempty" description:"spec defines the desired quota; http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status"`
	// Status defines the actual enforced quota and its current usage
	Status ResourceQuotaStatus `json:"status,omitempty" description:"status defines the actual enforced quota and current usage; http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status"`
}

// ResourceQuotaList is a list of ResourceQuota items
type ResourceQuotaList struct {
	TypeMeta `json:",inline"`
	ListMeta `json:"metadata,omitempty" description:"standard list metadata; see http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata"`
	// Items is a list of ResourceQuota objects
	Items []ResourceQuota `json:"items" description:"items is a list of ResourceQuota objects; see http://releases.k8s.io/HEAD/docs/design/admission_control_resource_quota.md#admissioncontrol-plugin-resourcequota"`
}
```
## Quota Tracked Resources
The following resources are supported by the quota system:
| Resource | Description |
| ------------ | ----------- |
| cpu | Total requested cpu usage |
| memory | Total requested memory usage |
| pods | Total number of active pods where phase is pending or active. |
| services | Total number of services |
| replicationcontrollers | Total number of replication controllers |
| resourcequotas | Total number of resource quotas |
| secrets | Total number of secrets |
| persistentvolumeclaims | Total number of persistent volume claims |
If a third-party wants to track additional resources, it must follow the
resource naming conventions prescribed by Kubernetes. This means the resource
must have a fully-qualified name (e.g. mycompany.org/shinynewresource).
## Resource Requirements: Requests vs. Limits
If a resource supports the ability to distinguish between a request and a limit
for a resource, the quota tracking system will only cost the request value
against the quota usage. If a resource is tracked by quota, and no request value
is provided, the associated entity is rejected as part of admission.
For an example, consider the following scenarios relative to tracking quota on
CPU:
| Pod | Container | Request CPU | Limit CPU | Result |
| --- | --------- | ----------- | --------- | ------ |
| X | C1 | 100m | 500m | The quota usage is incremented 100m |
| Y | C2 | 100m | none | The quota usage is incremented 100m |
| Y | C2 | none | 500m | The quota usage is incremented 500m since request will default to limit |
| Z | C3 | none | none | The pod is rejected since it does not enumerate a request. |
The rationale for accounting for the requested amount of a resource versus the
limit is the belief that a user should only be charged for what they are
scheduled against in the cluster. In addition, attempting to track usage against
actual usage, where request < actual < limit, is considered highly volatile.
As a consequence of this decision, the user is able to spread its usage of a
resource across multiple tiers of service. Let's demonstrate this via an
example with a 4 cpu quota.
The quota may be allocated as follows:
| Pod | Container | Request CPU | Limit CPU | Tier | Quota Usage |
| --- | --------- | ----------- | --------- | ---- | ----------- |
| X | C1 | 1 | 4 | Burstable | 1 |
| Y | C2 | 2 | 2 | Guaranteed | 2 |
| Z | C3 | 1 | 3 | Burstable | 1 |
It is possible that the pods may consume 9 cpu over a given time period,
depending on the available cpu of the nodes that hold pods X and Z, but since we
scheduled X and Z relative to their requests, we only track the requested value
against their allocated quota. If one wants to restrict the ratio between the
request and limit, it is encouraged that the user define a **LimitRange** with
**LimitRequestRatio** to control burst-out behavior. This would, in effect, let
an administrator keep the difference between request and limit more in line with
tracked usage if desired.
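A minimal sketch of the request-vs-limit charging rule from the first table in
this section, simplified to milli-units of cpu; all names and types here are
hypothetical.

```go
// chargedCPU returns the amount costed against quota for one container,
// following the rules above: charge the request; fall back to the limit
// (request defaults to limit); otherwise reject at admission.
func chargedCPU(requestMilli, limitMilli int64, hasRequest, hasLimit bool) (int64, error) {
	switch {
	case hasRequest:
		return requestMilli, nil
	case hasLimit:
		return limitMilli, nil
	default:
		return 0, errors.New("rejected: no request enumerated for a quota-tracked resource")
	}
}
```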
## Status API
A REST API endpoint to update the status section of the **ResourceQuota** is
exposed. It requires an atomic compare-and-swap in order to keep resource usage
tracking consistent.
## Resource Quota Controller
A resource quota controller monitors observed usage for tracked resources in the
**Namespace**.
If there is observed difference between the current usage stats versus the
current **ResourceQuota.Status**, the controller posts an update of the
currently observed usage metrics to the **ResourceQuota** via the /status
endpoint.
The resource quota controller is the only component capable of monitoring and
recording usage updates after a DELETE operation since admission control is
incapable of guaranteeing a DELETE request actually succeeded.
## AdmissionControl plugin: ResourceQuota
The **ResourceQuota** plug-in introspects all incoming admission requests.
To enable the plug-in and support for ResourceQuota, the kube-apiserver must be
configured as follows:
```console
$ kube-apiserver --admission-control=ResourceQuota
```
It makes decisions by evaluating the incoming object against all defined
**ResourceQuota.Status.Hard** resource limits in the request namespace. If
acceptance of the resource would cause the total usage of a named resource to
exceed its hard limit, the request is denied.
If the incoming request does not cause the total usage to exceed any of the
enumerated hard resource limits, the plug-in will post a
**ResourceQuota.Status** document to the server to atomically update the
observed usage based on the previously read **ResourceQuota.ResourceVersion**.
This keeps incremental usage atomically consistent, but does introduce a
bottleneck (intentionally) into the system.
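A sketch of that compare-and-swap flow, with a deliberately simplified quota
document and a hypothetical client interface (the real plug-in works with the
**ResourceQuota** API objects defined above):

```go
// quotaDoc is a simplified stand-in for a ResourceQuota's status.
type quotaDoc struct {
	ResourceVersion string
	HardCPUMilli    int64
	UsedCPUMilli    int64
}

// quotaClient is hypothetical; UpdateStatus must fail with a conflict if the
// stored resourceVersion no longer matches q.ResourceVersion.
type quotaClient interface {
	Get(namespace, name string) (*quotaDoc, error)
	UpdateStatus(namespace string, q *quotaDoc) error
}

// tryCharge re-reads the quota, applies the proposed usage, and writes the
// status back conditioned on the resourceVersion read; it retries on conflict.
func tryCharge(c quotaClient, namespace, name string, deltaMilli int64) error {
	for attempt := 0; attempt < 3; attempt++ {
		q, err := c.Get(namespace, name)
		if err != nil {
			return err
		}
		if q.UsedCPUMilli+deltaMilli > q.HardCPUMilli {
			return errors.New("forbidden: exceeded quota")
		}
		q.UsedCPUMilli += deltaMilli
		if err := c.UpdateStatus(namespace, q); err == nil {
			return nil
		}
	}
	return errors.New("too many conflicts while updating quota status")
}
```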
To optimize system performance, it is encouraged that all resource quotas are
tracked on the same **ResourceQuota** document in a **Namespace**. As a result,
it is encouraged to cap the number of individual **ResourceQuota** documents
tracked in a **Namespace** at 1.
## kubectl
kubectl is modified to support the **ResourceQuota** resource.
`kubectl describe` provides a human-readable output of quota.
For example:
```console
$ kubectl create -f test/fixtures/doc-yaml/admin/resourcequota/namespace.yaml
namespace "quota-example" created
$ kubectl create -f test/fixtures/doc-yaml/admin/resourcequota/quota.yaml --namespace=quota-example
resourcequota "quota" created
$ kubectl describe quota quota --namespace=quota-example
Name: quota
Namespace: quota-example
Resource Used Hard
-------- ---- ----
cpu 0 20
memory 0 1Gi
persistentvolumeclaims 0 10
pods 0 10
replicationcontrollers 0 20
resourcequotas 1 1
secrets 1 10
services 0 5
```
## More information
See [resource quota document](../admin/resource-quota.md) and the [example of Resource Quota](../admin/resourcequota/) for more information.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/admission_control_resource_quota.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

BIN
vendor/k8s.io/kubernetes/docs/design/architecture.dia generated vendored Normal file

Binary file not shown.

85
vendor/k8s.io/kubernetes/docs/design/architecture.md generated vendored Normal file
View file

@ -0,0 +1,85 @@
# Kubernetes architecture
A running Kubernetes cluster contains node agents (`kubelet`) and master
components (APIs, scheduler, etc), on top of a distributed storage solution.
This diagram shows our desired eventual state, though we're still working on a
few things, like making `kubelet` itself (all our components, really) run within
containers, and making the scheduler 100% pluggable.
![Architecture Diagram](architecture.png?raw=true "Architecture overview")
## The Kubernetes Node
When looking at the architecture of the system, we'll break it down into services
that run on the worker node and services that compose the cluster-level control
plane.
The Kubernetes node has the services necessary to run application containers and
be managed from the master systems.
Each node runs Docker, of course. Docker takes care of the details of
downloading images and running containers.
### `kubelet`
The `kubelet` manages [pods](../user-guide/pods.md) and their containers, their
images, their volumes, etc.
### `kube-proxy`
Each node also runs a simple network proxy and load balancer (see the
[services FAQ](https://github.com/kubernetes/kubernetes/wiki/Services-FAQ) for
more details). This reflects `services` (see
[the services doc](../user-guide/services.md) for more details) as defined in
the Kubernetes API on each node and can do simple TCP and UDP stream forwarding
(round robin) across a set of backends.
Service endpoints are currently found via [DNS](../admin/dns.md) or through
environment variables (both
[Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) and
Kubernetes `{FOO}_SERVICE_HOST` and `{FOO}_SERVICE_PORT` variables are
supported). These variables resolve to ports managed by the service proxy.
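For example, client code inside a pod might locate a service named
`redis-master` (a hypothetical service name) through the injected environment
variables; traffic to the resulting address is handled by the node's service
proxy.

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// Kubernetes injects {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT for each
	// service visible to the pod.
	host := os.Getenv("REDIS_MASTER_SERVICE_HOST")
	port := os.Getenv("REDIS_MASTER_SERVICE_PORT")
	fmt.Println("connecting to", net.JoinHostPort(host, port))
}
```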
## The Kubernetes Control Plane
The Kubernetes control plane is split into a set of components. Currently they
all run on a single _master_ node, but that is expected to change soon in order
to support high-availability clusters. These components work together to provide
a unified view of the cluster.
### `etcd`
All persistent master state is stored in an instance of `etcd`. This provides a
great way to store configuration data reliably. With `watch` support,
coordinating components can be notified very quickly of changes.
### Kubernetes API Server
The apiserver serves up the [Kubernetes API](../api.md). It is intended to be a
CRUD-y server, with most/all business logic implemented in separate components
or in plug-ins. It mainly processes REST operations, validates them, and updates
the corresponding objects in `etcd` (and eventually other stores).
### Scheduler
The scheduler binds unscheduled pods to nodes via the `/binding` API. The
scheduler is pluggable, and we expect to support multiple cluster schedulers and
even user-provided schedulers in the future.
### Kubernetes Controller Manager Server
All other cluster-level functions are currently performed by the Controller
Manager. For instance, `Endpoints` objects are created and updated by the
endpoints controller, and nodes are discovered, managed, and monitored by the
node controller. These could eventually be split into separate components to
make them independently pluggable.
The [`replicationcontroller`](../user-guide/replication-controller.md) is a
mechanism that is layered on top of the simple [`pod`](../user-guide/pods.md)
API. We eventually plan to port it to a generic plug-in mechanism, once one is
implemented.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/architecture.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

BIN
vendor/k8s.io/kubernetes/docs/design/architecture.png generated vendored Normal file

Binary file not shown.


1943
vendor/k8s.io/kubernetes/docs/design/architecture.svg generated vendored Normal file

File diff suppressed because it is too large Load diff


View file

@ -0,0 +1,310 @@
# Peeking under the hood of Kubernetes on AWS
This document provides high-level insight into how Kubernetes works on AWS and
maps to AWS objects. We assume that you are familiar with AWS.
We encourage you to use [kube-up](../getting-started-guides/aws.md) to create
clusters on AWS. We recommend that you avoid manual configuration but are aware
that sometimes it's the only option.
Tip: You should open an issue and let us know what enhancements can be made to
the scripts to better suit your needs.
That said, it's also useful to know what's happening under the hood when
Kubernetes clusters are created on AWS. This can be particularly useful if
problems arise or in circumstances where the provided scripts are lacking and
you manually created or configured your cluster.
**Table of contents:**
* [Architecture overview](#architecture-overview)
* [Storage](#storage)
* [Auto Scaling group](#auto-scaling-group)
* [Networking](#networking)
* [NodePort and LoadBalancer services](#nodeport-and-loadbalancer-services)
* [Identity and access management (IAM)](#identity-and-access-management-iam)
* [Tagging](#tagging)
* [AWS objects](#aws-objects)
* [Manual infrastructure creation](#manual-infrastructure-creation)
* [Instance boot](#instance-boot)
### Architecture overview
Kubernetes is a cluster of several machines that consists of a Kubernetes
master and a set number of nodes (previously known as 'minions') for which the
master is responsible. See the [Architecture](architecture.md) topic for
more details.
By default on AWS:
* Instances run Ubuntu 15.04 (the official AMI). It includes a sufficiently
modern kernel that pairs well with Docker and doesn't require a
reboot. (The default SSH user is `ubuntu` for this and other ubuntu images.)
* Nodes use aufs instead of ext4 as the filesystem / container storage (mostly
because this is what Google Compute Engine uses).
You can override these defaults by passing different environment variables to
kube-up.
### Storage
AWS supports persistent volumes by using [Elastic Block Store (EBS)](../user-guide/volumes.md#awselasticblockstore).
These can then be attached to pods that should store persistent data (e.g. if
you're running a database).
By default, nodes in AWS use [instance storage](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html)
unless you create pods with persistent volumes
[(EBS)](../user-guide/volumes.md#awselasticblockstore). In general, Kubernetes
containers do not have persistent storage unless you attach a persistent
volume, and so nodes on AWS use instance storage. Instance storage is cheaper,
often faster, and historically more reliable. Unless you can make do with
whatever space is left on your root partition, you must choose an instance type
that provides you with sufficient instance storage for your needs.
To configure Kubernetes to use EBS storage, pass the environment variable
`KUBE_AWS_STORAGE=ebs` to kube-up.
Note: The master uses a persistent volume ([etcd](architecture.md#etcd)) to
track its state. Similar to nodes, containers are mostly run against instance
storage, except that we repoint some important data onto the persistent volume.
The default storage driver for Docker images is aufs. Specifying btrfs (by
passing the environment variable `DOCKER_STORAGE=btrfs` to kube-up) is also a
good choice for a filesystem. btrfs is relatively reliable with Docker and has
improved its reliability with modern kernels. It can easily span multiple
volumes, which is particularly useful when we are using an instance type with
multiple ephemeral instance disks.
### Auto Scaling group
Nodes (but not the master) are run in an
[Auto Scaling group](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroup.html)
on AWS. Currently auto-scaling (e.g. based on CPU) is not actually enabled
([#11935](http://issues.k8s.io/11935)). Instead, the Auto Scaling group means
that AWS will relaunch any nodes that are terminated.
We do not currently run the master in an AutoScalingGroup, but we should
([#11934](http://issues.k8s.io/11934)).
### Networking
Kubernetes uses an IP-per-pod model. This means that a node, which runs many
pods, must have many IPs. AWS uses virtual private clouds (VPCs) and advanced
routing support, so each node is assigned a /24 CIDR for its pods. The assigned
CIDR is then configured to route to that node's instance in the VPC routing table.
It is also possible to use overlay networking on AWS, but that is not the
default configuration of the kube-up script.
### NodePort and LoadBalancer services
Kubernetes on AWS integrates with [Elastic Load Balancing
(ELB)](http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/US_SetUpASLBApp.html).
When you create a service with `Type=LoadBalancer`, Kubernetes (the
kube-controller-manager) will create an ELB, create a security group for the
ELB which allows access on the service ports, attach all the nodes to the ELB,
and modify the security group for the nodes to allow traffic from the ELB to
the nodes. This traffic reaches kube-proxy where it is then forwarded to the
pods.
ELB has some restrictions:
* ELB requires that all nodes listen on a single port,
* ELB acts as a forwarding proxy (i.e. the source IP is not preserved, but see below
on ELB annotations for pods speaking HTTP).
To work with these restrictions, in Kubernetes, [LoadBalancer
services](../user-guide/services.md#type-loadbalancer) are exposed as
[NodePort services](../user-guide/services.md#type-nodeport). Then
kube-proxy listens externally on the cluster-wide port that's assigned to
NodePort services and forwards traffic to the corresponding pods.
For example, if we configure a service of Type LoadBalancer with a
public port of 80:
* Kubernetes will assign a NodePort to the service (e.g. port 31234)
* ELB is configured to proxy traffic on the public port 80 to the NodePort
assigned to the service (in this example port 31234).
* Then any in-coming traffic that ELB forwards to the NodePort (31234)
is recognized by kube-proxy and sent to the correct pods for that service.
Note that we do not automatically open NodePort services in the AWS firewall
(although we do open LoadBalancer services). This is because we expect that
NodePort services are more of a building block for things like inter-cluster
services or for LoadBalancer. To consume a NodePort service externally, you
will likely have to open the port in the node security group
(`kubernetes-node-<clusterid>`).
For SSL support, starting with 1.3 two annotations can be added to a service:
```
service.beta.kubernetes.io/aws-load-balancer-ssl-cert=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
```
The first specifies which certificate to use. It can be either a
certificate from a third party issuer that was uploaded to IAM or one created
within AWS Certificate Manager.
```
service.beta.kubernetes.io/aws-load-balancer-backend-protocol=(https|http|ssl|tcp)
```
The second annotation specifies which protocol a pod speaks. For HTTPS and
SSL, the ELB will expect the pod to authenticate itself over the encrypted
connection.
HTTP and HTTPS will select layer 7 proxying: the ELB will terminate
the connection with the user, parse headers and inject the `X-Forwarded-For`
header with the user's IP address (pods will only see the IP address of the
ELB at the other end of its connection) when forwarding requests.
TCP and SSL will select layer 4 proxying: the ELB will forward traffic without
modifying the headers.
### Identity and Access Management (IAM)
kube-up sets up two IAM roles, one for the master called
[kubernetes-master](../../cluster/aws/templates/iam/kubernetes-master-policy.json)
and one for the nodes called
[kubernetes-node](../../cluster/aws/templates/iam/kubernetes-minion-policy.json).
The master is responsible for creating ELBs and configuring them, as well as
setting up advanced VPC routing. Currently it has blanket permissions on EC2,
along with rights to create and destroy ELBs.
The nodes do not need a lot of access to the AWS APIs. They need to download
a distribution file, and then are responsible for attaching and detaching EBS
volumes from themselves.
The node policy is relatively minimal. In 1.2 and later, nodes can retrieve ECR
authorization tokens, refresh them every 12 hours if needed, and fetch Docker
images from it, as long as the appropriate permissions are enabled. Those in
[AmazonEC2ContainerRegistryReadOnly](http://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html#AmazonEC2ContainerRegistryReadOnly),
without write access, should suffice. The master policy is probably overly
permissive. The security conscious may want to lock-down the IAM policies
further ([#11936](http://issues.k8s.io/11936)).
We should make it easier to extend IAM permissions and also ensure that they
are correctly configured ([#14226](http://issues.k8s.io/14226)).
### Tagging
All AWS resources are tagged with a tag named "KubernetesCluster", with a value
that is the unique cluster-id. This tag is used to identify a particular
'instance' of Kubernetes, even if two clusters are deployed into the same VPC.
Resources are considered to belong to the same cluster if and only if they have
the same value in the tag named "KubernetesCluster". (The kube-up script is
not configured to create multiple clusters in the same VPC by default, but it
is possible to create another cluster in the same VPC.)
Within the AWS cloud provider logic, we filter requests to the AWS APIs to
match resources with our cluster tag. By filtering the requests, we ensure
that we see only our own AWS objects.
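A simplified sketch of that tag-based filtering (the struct and helper below
are illustrative only, not the cloud provider's actual types):

```go
// taggedResource is a stand-in for an AWS resource description returned by
// the AWS APIs.
type taggedResource struct {
	ID   string
	Tags map[string]string
}

// ownedByCluster keeps only resources whose KubernetesCluster tag matches
// this cluster's unique cluster-id.
func ownedByCluster(resources []taggedResource, clusterID string) []taggedResource {
	var owned []taggedResource
	for _, r := range resources {
		if r.Tags["KubernetesCluster"] == clusterID {
			owned = append(owned, r)
		}
	}
	return owned
}
```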
**Important:** If you choose not to use kube-up, you must pick a unique
cluster-id value, and ensure that all AWS resources have a tag with
`Name=KubernetesCluster,Value=<clusterid>`.
### AWS objects
The kube-up script does a number of things in AWS:
* Creates an S3 bucket (`AWS_S3_BUCKET`) and then copies the Kubernetes
distribution and the salt scripts into it. They are made world-readable and the
HTTP URLs are passed to instances; this is how Kubernetes code gets onto the
machines.
* Creates two IAM profiles based on templates in [cluster/aws/templates/iam](../../cluster/aws/templates/iam/):
* `kubernetes-master` is used by the master.
* `kubernetes-node` is used by nodes.
* Creates an AWS SSH key named `kubernetes-<fingerprint>`. Fingerprint here is
the OpenSSH key fingerprint, so that multiple users can run the script with
different keys and their keys will not collide (with near-certainty). It will
use an existing key if one is found at `AWS_SSH_KEY`, otherwise it will create
one there. (With the default Ubuntu images, if you have to SSH in: the user is
`ubuntu` and that user can `sudo`).
* Creates a VPC for use with the cluster (with a CIDR of 172.20.0.0/16) and
enables the `dns-support` and `dns-hostnames` options.
* Creates an internet gateway for the VPC.
* Creates a route table for the VPC, with the internet gateway as the default
route.
* Creates a subnet (with a CIDR of 172.20.0.0/24) in the AZ `KUBE_AWS_ZONE`
(defaults to us-west-2a). Currently, each Kubernetes cluster runs in a
single AZ on AWS. However, there are two philosophies under discussion on how to
achieve High Availability (HA):
  * cluster-per-AZ: An independent cluster for each AZ, where each cluster
  is entirely separate.
  * cross-AZ-clusters: A single cluster spans multiple AZs.
The debate is open: cluster-per-AZ is considered more robust, but
cross-AZ-clusters are more convenient.
* Associates the subnet to the route table
* Creates security groups for the master (`kubernetes-master-<clusterid>`)
and the nodes (`kubernetes-node-<clusterid>`).
* Configures security groups so that masters and nodes can communicate. This
includes intercommunication between masters and nodes, opening SSH publicly
for both masters and nodes, and opening port 443 on the master for the HTTPS
API endpoints.
* Creates an EBS volume for the master of size `MASTER_DISK_SIZE` and type
`MASTER_DISK_TYPE`.
* Launches a master with a fixed IP address (172.20.0.9) that is also
configured for the security group and all the necessary IAM credentials. An
instance script is used to pass vital configuration information to Salt. Note:
The hope is that over time we can reduce the amount of configuration
information that must be passed in this way.
* Once the instance is up, it attaches the EBS volume and sets up a manual
routing rule for the internal network range (`MASTER_IP_RANGE`, defaults to
10.246.0.0/24).
* For auto-scaling, it creates a launch configuration and an auto-scaling group
for the nodes. The name for both is <*KUBE_AWS_INSTANCE_PREFIX*>-node-group. The
default name is kubernetes-node-group. The auto-scaling group has a min and max size
that are both set to NUM_NODES. You can change the size of the auto-scaling
group to add or remove the total number of nodes from within the AWS API or
Console. Each node self-configures, meaning that it comes up; runs Salt with
the stored configuration; connects to the master; is assigned an internal CIDR;
and then the master configures the route-table with the assigned CIDR. The
kube-up script performs a health-check on the nodes, but it's a self-check that
is not required.
If attempting this configuration manually, it is recommended to follow along
with the kube-up script, being sure to tag everything with a tag named
`KubernetesCluster` whose value is set to a unique cluster-id. Also, passing the
right configuration options to Salt when not using the script is tricky: the
plan here is to simplify this by having Kubernetes take on more node
configuration, and even potentially remove Salt altogether.
### Manual infrastructure creation
While this work is not yet complete, advanced users might choose to manually
create certain AWS objects while still making use of the kube-up script (to
configure Salt, for example). These objects can currently be manually created:
* Set the `AWS_S3_BUCKET` environment variable to use an existing S3 bucket.
* Set the `VPC_ID` environment variable to reuse an existing VPC.
* Set the `SUBNET_ID` environment variable to reuse an existing subnet.
* If your route table has a matching `KubernetesCluster` tag, it will be reused.
* If your security groups are appropriately named, they will be reused.
Currently there is no way to do the following with kube-up:
* Use an existing AWS SSH key with an arbitrary name.
* Override the IAM credentials in a sensible way
([#14226](http://issues.k8s.io/14226)).
* Use different security group permissions.
* Configure your own auto-scaling groups.
If any of the above items apply to your situation, open an issue to request an
enhancement to the kube-up script. You should provide a complete description of
the use-case, including all the details around what you want to accomplish.
### Instance boot
The instance boot procedure is currently pretty complicated, primarily because
we must marshal configuration from Bash to Salt via the AWS instance script.
As we move more post-boot configuration out of Salt and into Kubernetes, we
will hopefully be able to simplify this.
When the kube-up script launches instances, it builds an instance startup
script which includes some configuration options passed to kube-up, and
concatenates some of the scripts found in the cluster/aws/templates directory.
These scripts are responsible for mounting and formatting volumes, downloading
Salt and Kubernetes from the S3 bucket, and then triggering Salt to actually
install Kubernetes.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/aws_under_the_hood.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

128
vendor/k8s.io/kubernetes/docs/design/clustering.md generated vendored Normal file
View file

@ -0,0 +1,128 @@
# Clustering in Kubernetes
## Overview
The term "clustering" refers to the process of having all members of the
Kubernetes cluster find and trust each other. There are multiple different ways
to achieve clustering with different security and usability profiles. This
document attempts to lay out the user experiences for clustering that Kubernetes
aims to address.
Once a cluster is established, the following is true:
1. **Master -> Node** The master needs to know which nodes can take work and
what their current status is with respect to capacity.
1. **Location** The master knows the name and location of all of the nodes in
the cluster.
* For the purposes of this doc, location and name should be enough
information so that the master can open a TCP connection to the Node. Most
probably we will make this either an IP address or a DNS name. It is going to be
important to be consistent here (master must be able to reach kubelet on that
DNS name) so that we can verify certificates appropriately.
2. **Target AuthN** A way to securely talk to the kubelet on that node.
Currently we call out to the kubelet over HTTP. This should be over HTTPS and
the master should know what CA to trust for that node.
3. **Caller AuthN/Z** This would be the master verifying itself (and
permissions) when calling the node. Currently, this is only used to collect
statistics as authorization isn't critical. This may change in the future
though.
2. **Node -> Master** The nodes currently talk to the master to know which pods
have been assigned to them and to publish events.
1. **Location** The nodes must know where the master is.
2. **Target AuthN** Since the master is assigning work to the nodes, it is
critical that they verify whom they are talking to.
3. **Caller AuthN/Z** The nodes publish events and so must be authenticated to
the master. Ideally this authentication is specific to each node so that
authorization can be narrowly scoped. The details of the work to run (including
things like environment variables) might be considered sensitive and should be
locked down also.
**Note:** While the description here refers to a singular Master, in the future
we should enable multiple Masters operating in an HA mode. While the "Master" is
currently the combination of the API Server, Scheduler and Controller Manager,
we will restrict ourselves to thinking about the main API and policy engine --
the API Server.
## Current Implementation
A central authority (generally the master) is responsible for determining the
set of machines which are members of the cluster. Calls to create and remove
worker nodes in the cluster are restricted to this single authority, and any
other requests to add or remove worker nodes are rejected. (1.i.)
Communication from the master to nodes is currently over HTTP and is not secured
or authenticated in any way. (1.ii, 1.iii.)
The location of the master is communicated out of band to the nodes. For GCE,
this is done via Salt. Other cluster instructions/scripts use other methods.
(2.i.)
Currently most communication from the node to the master is over HTTP. When it
is done over HTTPS, there is currently no verification of the master's cert.
(2.ii.)
Currently, the node/kubelet is authenticated to the master via a token shared
across all nodes. This token is distributed out of band (using Salt for GCE) and
is optional. If it is not present then the kubelet is unable to publish events
to the master. (2.iii.)
Our current mix of out of band communication doesn't meet all of our needs from
a security point of view and is difficult to set up and configure.
## Proposed Solution
The proposed solution will provide a range of options for setting up and
maintaining a secure Kubernetes cluster. We want to both allow for centrally
controlled systems (leveraging pre-existing trust and configuration systems) or
more ad-hoc automagic systems that are incredibly easy to set up.
The building blocks of an easier solution:
* **Move to TLS** We will move to using TLS for all intra-cluster communication.
We will explicitly identify the trust chain (the set of trusted CAs) as opposed
to trusting the system CAs. We will also use client certificates for all AuthN.
(A minimal sketch of such a server-side TLS configuration follows this list.)
* [optional] **API driven CA** Optionally, we will run a CA in the master that
will mint certificates for the nodes/kubelets. There will be pluggable policies
that will automatically approve certificate requests here as appropriate.
* **CA approval policy** This is a pluggable policy object that can
automatically approve CA signing requests. Stock policies will include
`always-reject`, `queue` and `insecure-always-approve`. With `queue` there would
be an API for evaluating and accepting/rejecting requests. Cloud providers could
implement a policy here that verifies other out of band information and
automatically approves/rejects based on other external factors.
* **Scoped Kubelet Accounts** These accounts are per-node and (optionally) give
a node permission to register itself.
* To start with, we'd have the kubelets generate a cert/account in the form of
`kubelet:<host>`. To start we would then hard code policy such that we give that
particular account appropriate permissions. Over time, we can make the policy
engine more generic.
* [optional] **Bootstrap API endpoint** This is a helper service hosted outside
of the Kubernetes cluster that helps with initial discovery of the master.
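As a rough illustration of the "Move to TLS" building block above, a
server-side TLS configuration that trusts only an explicit cluster CA and
requires client certificates might be assembled like this; how `caPEM`,
`certFile`, and `keyFile` are provisioned depends on whether the static or
API-driven CA flow is used.

```go
import (
	"crypto/tls"
	"crypto/x509"
	"errors"
)

// newServerTLSConfig trusts only the explicit cluster CA (not the system CAs)
// and requires client certificates for all AuthN.
func newServerTLSConfig(caPEM []byte, certFile, keyFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		return nil, errors.New("no CA certificates found in caPEM")
	}
	return &tls.Config{
		Certificates: []tls.Certificate{cert},
		ClientCAs:    pool,
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}, nil
}
```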
### Static Clustering
In this sequence diagram there is out of band admin entity that is creating all
certificates and distributing them. It is also making sure that the kubelets
know where to find the master. This provides for a lot of control but is more
difficult to set up as lots of information must be communicated outside of
Kubernetes.
![Static Sequence Diagram](clustering/static.png)
### Dynamic Clustering
This diagram shows dynamic clustering using the bootstrap API endpoint. This
endpoint is used to both find the location of the master and communicate the
root CA for the master.
This flow has the admin manually approving the kubelet signing requests. This is
the `queue` policy defined above. This manual intervention could be replaced by
code that can verify the signing requests via other means.
![Dynamic Sequence Diagram](clustering/dynamic.png)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1 @@
DroidSansMono.ttf

View file

@ -0,0 +1,26 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM debian:jessie
RUN apt-get update
RUN apt-get -qy install python-seqdiag make curl
WORKDIR /diagrams
RUN curl -sLo DroidSansMono.ttf https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/DroidSansMono.ttf
ADD . /diagrams
CMD bash -c 'make >/dev/stderr && tar cf - *.png'

View file

@ -0,0 +1,41 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FONT := DroidSansMono.ttf

PNGS := $(patsubst %.seqdiag,%.png,$(wildcard *.seqdiag))

.PHONY: all
all: $(PNGS)

.PHONY: watch
watch:
	fswatch *.seqdiag | xargs -n 1 sh -c "make || true"

$(FONT):
	curl -sLo $@ https://googlefontdirectory.googlecode.com/hg/apache/droidsansmono/$(FONT)

%.png: %.seqdiag $(FONT)
	seqdiag --no-transparency -a -f '$(FONT)' $<

# Build the stuff via a docker image
.PHONY: docker
docker:
	docker build -t clustering-seqdiag .
	docker run --rm clustering-seqdiag | tar xvf -

.PHONY: docker-clean
docker-clean:
	docker rmi clustering-seqdiag || true
	docker images -q --filter "dangling=true" | xargs docker rmi

View file

@ -0,0 +1,35 @@
This directory contains diagrams for the clustering design doc.
This depends on the `seqdiag` [utility](http://blockdiag.com/en/seqdiag/index.html).
Assuming you have a non-borked python install, this should be installable with:
```sh
pip install seqdiag
```
Just call `make` to regenerate the diagrams.
## Building with Docker
If you are on a Mac or your pip install is messed up, you can easily build with
docker:
```sh
make docker
```
The first run will be slow but things should be fast after that.
To clean up the docker containers that are created (and other cruft that is left
around) you can run `make docker-clean`.
## Automatically rebuild on file changes
If you have the fswatch utility installed, you can have it monitor the file
system and automatically rebuild when files have changed. Just do a
`make watch`.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/clustering/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

Binary file not shown.

After

Width: | Height: | Size: 71 KiB

View file

@ -0,0 +1,24 @@
seqdiag {
activation = none;
user[label = "Admin User"];
bootstrap[label = "Bootstrap API\nEndpoint"];
master;
kubelet[stacked];
user -> bootstrap [label="createCluster", return="cluster ID"];
user <-- bootstrap [label="returns\n- bootstrap-cluster-uri"];
user ->> master [label="start\n- bootstrap-cluster-uri"];
master => bootstrap [label="setMaster\n- master-location\n- master-ca"];
user ->> kubelet [label="start\n- bootstrap-cluster-uri"];
kubelet => bootstrap [label="get-master", return="returns\n- master-location\n- master-ca"];
kubelet ->> master [label="signCert\n- unsigned-kubelet-cert", return="returns\n- kubelet-cert"];
user => master [label="getSignRequests"];
user => master [label="approveSignRequests"];
kubelet <<-- master [label="returns\n- kubelet-cert"];
kubelet => master [label="register\n- kubelet-location"]
}

Binary file not shown.

After

Width: | Height: | Size: 36 KiB

View file

@ -0,0 +1,16 @@
seqdiag {
activation = none;
admin[label = "Manual Admin"];
ca[label = "Manual CA"]
master;
kubelet[stacked];
admin => ca [label="create\n- master-cert"];
admin ->> master [label="start\n- ca-root\n- master-cert"];
admin => ca [label="create\n- kubelet-cert"];
admin ->> kubelet [label="start\n- ca-root\n- kubelet-cert\n- master-location"];
kubelet => master [label="register\n- kubelet-location"];
}

View file

@ -0,0 +1,158 @@
# Container Command Execution & Port Forwarding in Kubernetes
## Abstract
This document describes how to use Kubernetes to execute commands in containers,
with stdin/stdout/stderr streams attached and how to implement port forwarding
to the containers.
## Background
See the following related issues/PRs:
- [Support attach](http://issue.k8s.io/1521)
- [Real container ssh](http://issue.k8s.io/1513)
- [Provide easy debug network access to services](http://issue.k8s.io/1863)
- [OpenShift container command execution proposal](https://github.com/openshift/origin/pull/576)
## Motivation
Users and administrators are accustomed to being able to access their systems
via SSH to run remote commands, get shell access, and do port forwarding.
Supporting SSH to containers in Kubernetes is a difficult task. You must
specify a "user" and a hostname to make an SSH connection, and `sshd` requires
real users (resolvable by NSS and PAM). Because a container belongs to a pod,
and the pod belongs to a namespace, you need to specify namespace/pod/container
to uniquely identify the target container. Unfortunately, a
namespace/pod/container is not a real user as far as SSH is concerned. Also,
most Linux systems limit user names to 32 characters, which is unlikely to be
large enough to contain namespace/pod/container. We could devise some scheme to
map each namespace/pod/container to a 32-character user name, adding entries to
`/etc/passwd` (or LDAP, etc.) and keeping those entries fully in sync all the
time. Alternatively, we could write custom NSS and PAM modules that allow the
host to resolve a namespace/pod/container to a user without needing to keep
files or LDAP in sync.
As an alternative to SSH, we are using a multiplexed streaming protocol that
runs on top of HTTP. There are no requirements about users being real users,
nor is there any limitation on user name length, as the protocol is under our
control. The only downside is that standard tooling that expects to use SSH
won't be able to work with this mechanism, unless adapters can be written.
## Constraints and Assumptions
- SSH support is not currently in scope.
- CGroup confinement is ultimately desired, but implementing that support is not
currently in scope.
- SELinux confinement is ultimately desired, but implementing that support is
not currently in scope.
## Use Cases
- A user of a Kubernetes cluster wants to run arbitrary commands in a
container with local stdin/stdout/stderr attached to the container.
- A user of a Kubernetes cluster wants to connect to local ports on his computer
and have them forwarded to ports in a container.
## Process Flow
### Remote Command Execution Flow
1. The client connects to the Kubernetes Master to initiate a remote command
execution request.
2. The Master proxies the request to the Kubelet where the container lives.
3. The Kubelet executes nsenter + the requested command and streams
stdin/stdout/stderr back and forth between the client and the container.
### Port Forwarding Flow
1. The client connects to the Kubernetes Master to initiate a port forwarding
request.
2. The Master proxies the request to the Kubelet where the container lives.
3. The client listens on each specified local port, awaiting local connections.
4. The client connects to one of the local listening ports.
5. The client notifies the Kubelet of the new connection.
6. The Kubelet executes nsenter + socat and streams data back and forth between
the client and the port in the container (see the sketch below).
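As a rough illustration of the last step, here is a minimal Go sketch of splicing a client connection into a container port by entering the container's network namespace and running `socat`. How the container's init PID is discovered, and all error handling, are omitted; the helper is hypothetical rather than the Kubelet's actual code.
```go
package sketch

import (
	"fmt"
	"io"
	"os/exec"
	"strconv"
)

// forwardToContainerPort streams data between clientConn and localhost:<port>
// inside the container's network namespace, using nsenter + socat.
func forwardToContainerPort(initPID, port int, clientConn io.ReadWriter) error {
	cmd := exec.Command("nsenter",
		"--target", strconv.Itoa(initPID), "--net", "--",
		"socat", "-", fmt.Sprintf("TCP4:localhost:%d", port))
	cmd.Stdin = clientConn  // bytes from the client flow into the container port
	cmd.Stdout = clientConn // bytes from the container port flow back to the client
	return cmd.Run()
}
```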
## Design Considerations
### Streaming Protocol
The current multiplexed streaming protocol used is SPDY. This is not the
long-term desire, however. As soon as there is viable support for HTTP/2 in Go,
we will switch to that.
### Master as First Level Proxy
Clients should not be allowed to communicate directly with the Kubelet for
security reasons. Therefore, the Master is currently the only suggested entry
point to be used for remote command execution and port forwarding. This is not
necessarily desirable, as it means that all remote command execution and port
forwarding traffic must travel through the Master, potentially impacting other
API requests.
In the future, it might make more sense to retrieve an authorization token from
the Master, and then use that token to initiate a remote command execution or
port forwarding request with a load balanced proxy service dedicated to this
functionality. This would keep the streaming traffic out of the Master.
### Kubelet as Backend Proxy
The kubelet is currently responsible for handling remote command execution and
port forwarding requests. Just like with the Master described above, this means
that all remote command execution and port forwarding streaming traffic must
travel through the Kubelet, which could result in a degraded ability to service
other requests.
In the future, it might make more sense to use a separate service on the node.
Alternatively, we could possibly inject a process into the container that only
listens for a single request, expose that process's listening port on the node,
and then issue a redirect to the client such that it would connect to the first
level proxy, which would then proxy directly to the injected process's exposed
port. This would minimize the amount of proxying that takes place.
### Scalability
There are at least 2 different ways to execute a command in a container:
`docker exec` and `nsenter`. While `docker exec` might seem like an easier and
more obvious choice, it has some drawbacks.
#### `docker exec`
We could expose `docker exec` (i.e. have Docker listen on an exposed TCP port
on the node), but this would require proxying from the edge and securing the
Docker API. `docker exec` calls go through the Docker daemon, meaning that all
stdin/stdout/stderr traffic is proxied through the daemon, adding an extra hop.
Additionally, you can't isolate one malicious `docker exec` call from normal
usage, meaning an attacker could initiate a denial of service or other attack
and take down the Docker daemon, or the node itself.
We expect remote command execution and port forwarding requests to be long
running and/or high bandwidth operations, and routing all the streaming data
through the Docker daemon feels like a bottleneck we can avoid.
#### `nsenter`
The implementation currently uses `nsenter` to run commands in containers,
joining the appropriate container namespaces. `nsenter` runs directly on the
node and is not proxied through any single daemon process.
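For illustration, a minimal Go sketch of this approach is shown below. It joins the container's mount, UTS, IPC, network, and PID namespaces and runs the requested command with the caller's streams attached; discovering the init PID and the streaming protocol around this call are out of scope, and the helper is hypothetical.
```go
package sketch

import (
	"io"
	"os/exec"
	"strconv"
)

// runInContainer executes command inside the namespaces of the container whose
// init process has PID initPID, wiring the caller's streams straight through.
func runInContainer(initPID int, command []string, stdin io.Reader, stdout, stderr io.Writer) error {
	args := []string{
		"--target", strconv.Itoa(initPID),
		"--mount", "--uts", "--ipc", "--net", "--pid",
		"--",
	}
	args = append(args, command...)
	cmd := exec.Command("nsenter", args...)
	cmd.Stdin = stdin
	cmd.Stdout = stdout
	cmd.Stderr = stderr
	return cmd.Run()
}
```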
### Security
Authentication and authorization haven't specifically been tested yet with this
functionality. We need to make sure that users are not allowed to execute
remote commands or do port forwarding to containers they aren't allowed to
access.
Additional work is required to ensure that multiple command execution or port
forwarding connections from different clients are not able to see each other's
data. This can most likely be achieved via SELinux labeling and unique process
contexts.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/command_execution_port_forwarding.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

300
vendor/k8s.io/kubernetes/docs/design/configmap.md generated vendored Normal file
View file

@ -0,0 +1,300 @@
# Generic Configuration Object
## Abstract
The `ConfigMap` API resource stores data used for the configuration of
applications deployed on Kubernetes.
The main focus of this resource is to:
* Provide dynamic distribution of configuration data to deployed applications.
* Encapsulate configuration information and simplify `Kubernetes` deployments.
* Create a flexible configuration model for `Kubernetes`.
## Motivation
A `Secret`-like API resource is needed to store configuration data that pods can
consume.
Goals of this design:
1. Describe a `ConfigMap` API resource.
2. Describe the semantics of consuming `ConfigMap` as environment variables.
3. Describe the semantics of consuming `ConfigMap` as files in a volume.
## Use Cases
1. As a user, I want to be able to consume configuration data as environment
variables.
2. As a user, I want to be able to consume configuration data as files in a
volume.
3. As a user, I want my view of configuration data in files to be eventually
consistent with changes to the data.
### Consuming `ConfigMap` as Environment Variables
A series of events for consuming `ConfigMap` as environment variables:
1. Create a `ConfigMap` object.
2. Create a pod to consume the configuration data via environment variables.
3. The pod is scheduled onto a node.
4. The Kubelet retrieves the `ConfigMap` resource(s) referenced by the pod and
starts the container processes with the appropriate configuration data from
environment variables.
### Consuming `ConfigMap` in Volumes
A series of events for consuming `ConfigMap` as configuration files in a volume:
1. Create a `ConfigMap` object.
2. Create a new pod using the `ConfigMap` via a volume plugin.
3. The pod is scheduled onto a node.
4. The Kubelet creates an instance of the volume plugin and calls its `Setup()`
method.
5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod
and projects the appropriate configuration data into the volume.
### Consuming `ConfigMap` Updates
Any long-running system has configuration that is mutated over time. Changes
made to configuration data must be made visible to pods consuming data in
volumes so that they can respond to those changes.
The `resourceVersion` of the `ConfigMap` object will be updated by the API
server every time the object is modified. After an update, modifications will be
made visible to the consumer container:
1. Create a `ConfigMap` object.
2. Create a new pod using the `ConfigMap` via the volume plugin.
3. The pod is scheduled onto a node.
4. During the sync loop, the Kubelet creates an instance of the volume plugin
and calls its `Setup()` method.
5. The volume plugin retrieves the `ConfigMap` resource(s) referenced by the pod
and projects the appropriate data into the volume.
6. The `ConfigMap` referenced by the pod is updated.
7. During the next iteration of the `syncLoop`, the Kubelet creates an instance
of the volume plugin and calls its `Setup()` method.
8. The volume plugin projects the updated data into the volume atomically.
It is the consuming pod's responsibility to make use of the updated data once it
is made visible.
Because environment variables cannot be updated without restarting a container,
configuration data consumed in environment variables will not be updated.
### Advantages
* Easy to consume in pods; consumer-agnostic
* Configuration data is persistent and versioned
* Consumers of configuration data in volumes can respond to changes in the data
## Proposed Design
### API Resource
The `ConfigMap` resource will be added to the main API:
```go
package api
// ConfigMap holds configuration data for pods to consume.
type ConfigMap struct {
TypeMeta `json:",inline"`
ObjectMeta `json:"metadata,omitempty"`
// Data contains the configuration data. Each key must be a valid
// DNS_SUBDOMAIN or leading dot followed by valid DNS_SUBDOMAIN.
Data map[string]string `json:"data,omitempty"`
}
type ConfigMapList struct {
TypeMeta `json:",inline"`
ListMeta `json:"metadata,omitempty"`
Items []ConfigMap `json:"items"`
}
```
A `Registry` implementation for `ConfigMap` will be added to
`pkg/registry/configmap`.
### Environment Variables
The `EnvVarSource` will be extended with a new selector for `ConfigMap`:
```go
package api
// EnvVarSource represents a source for the value of an EnvVar.
type EnvVarSource struct {
// other fields omitted
// Selects a key of a ConfigMap.
ConfigMapKeyRef *ConfigMapKeySelector `json:"configMapKeyRef,omitempty"`
}
// Selects a key from a ConfigMap.
type ConfigMapKeySelector struct {
// The ConfigMap to select from.
LocalObjectReference `json:",inline"`
// The key to select.
Key string `json:"key"`
}
```
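To show how this selector is intended to be used, here is a minimal Go sketch of resolving a `configMapKeyRef` into an environment variable value. It builds on the `ConfigMap` and `ConfigMapKeySelector` types above (and assumes `fmt` is imported); the `getConfigMap` lookup function is a hypothetical stand-in for however the Kubelet fetches the resource.
```go
// Sketch only, not the actual Kubelet code.
func resolveConfigMapKeyRef(namespace string, ref *ConfigMapKeySelector,
	getConfigMap func(namespace, name string) (*ConfigMap, error)) (string, error) {

	cm, err := getConfigMap(namespace, ref.Name) // Name comes from LocalObjectReference
	if err != nil {
		return "", err
	}
	value, ok := cm.Data[ref.Key]
	if !ok {
		return "", fmt.Errorf("key %q not found in ConfigMap %s/%s", ref.Key, namespace, ref.Name)
	}
	return value, nil
}
```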
### Volume Source
A new `ConfigMapVolumeSource` type of volume source containing the `ConfigMap`
object will be added to the `VolumeSource` struct in the API:
```go
package api
type VolumeSource struct {
// other fields omitted
ConfigMap *ConfigMapVolumeSource `json:"configMap,omitempty"`
}
// Represents a volume that holds configuration data.
type ConfigMapVolumeSource struct {
LocalObjectReference `json:",inline"`
// A list of keys to project into the volume.
// If unspecified, each key-value pair in the Data field of the
// referenced ConfigMap will be projected into the volume as a file whose name
// is the key and content is the value.
// If specified, the listed keys will be projected into the specified paths, and
// unlisted keys will not be present.
Items []KeyToPath `json:"items,omitempty"`
}
// Represents a mapping of a key to a relative path.
type KeyToPath struct {
// The name of the key to select
Key string `json:"key"`
// The relative path name of the file to be created.
// Must not be absolute or contain the '..' path. Must be utf-8 encoded.
// The first item of the relative path must not start with '..'
Path string `json:"path"`
}
```
**Note:** The update logic used in the downward API volume plug-in will be
extracted and re-used in the volume plug-in for `ConfigMap`.
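As a rough sketch of the projection step (ignoring the atomic-update mechanics noted above), each selected key simply becomes a file whose content is the value. The helper below is illustrative only and assumes the `KeyToPath` type defined earlier; it is not the actual volume plug-in code.
```go
package sketch

import (
	"io/ioutil"
	"os"
	"path/filepath"
)

// projectConfigMap writes ConfigMap data into dir. With no items, every key is
// written as a file named after the key; with items, only the listed keys are
// written, at their requested relative paths.
func projectConfigMap(dir string, data map[string]string, items []KeyToPath) error {
	selected := map[string]string{} // key -> relative path
	if len(items) == 0 {
		for k := range data {
			selected[k] = k
		}
	} else {
		for _, item := range items {
			selected[item.Key] = item.Path
		}
	}
	for key, relPath := range selected {
		target := filepath.Join(dir, relPath)
		if err := os.MkdirAll(filepath.Dir(target), 0755); err != nil {
			return err
		}
		if err := ioutil.WriteFile(target, []byte(data[key]), 0644); err != nil {
			return err
		}
	}
	return nil
}
```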
### Changes to Secret
We will update the Secret volume plugin to have a similar API to the new
`ConfigMap` volume plugin. The secret volume plugin will also begin updating
secret content in the volume when secrets change.
## Examples
#### Consuming `ConfigMap` as Environment Variables
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: etcd-env-config
data:
number-of-members: "1"
initial-cluster-state: new
initial-cluster-token: DUMMY_ETCD_INITIAL_CLUSTER_TOKEN
discovery-token: DUMMY_ETCD_DISCOVERY_TOKEN
discovery-url: http://etcd-discovery:2379
etcdctl-peers: http://etcd:2379
```
This pod consumes the `ConfigMap` as environment variables:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: config-env-example
spec:
containers:
- name: etcd
image: openshift/etcd-20-centos7
ports:
- containerPort: 2379
protocol: TCP
- containerPort: 2380
protocol: TCP
env:
- name: ETCD_NUM_MEMBERS
valueFrom:
configMapKeyRef:
name: etcd-env-config
key: number-of-members
- name: ETCD_INITIAL_CLUSTER_STATE
valueFrom:
configMapKeyRef:
name: etcd-env-config
key: initial-cluster-state
- name: ETCD_DISCOVERY_TOKEN
valueFrom:
configMapKeyRef:
name: etcd-env-config
key: discovery-token
- name: ETCD_DISCOVERY_URL
valueFrom:
configMapKeyRef:
name: etcd-env-config
key: discovery-url
- name: ETCDCTL_PEERS
valueFrom:
configMapKeyRef:
name: etcd-env-config
key: etcdctl-peers
```
#### Consuming `ConfigMap` as Volumes
`redis-volume-config` is intended to be used as a volume containing a config
file:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-volume-config
data:
redis.conf: "pidfile /var/run/redis.pid\nport 6379\ntcp-backlog 511\ndatabases 1\ntimeout 0\n"
```
The following pod consumes the `redis-volume-config` in a volume:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: config-volume-example
spec:
containers:
- name: redis
image: kubernetes/redis
command: ["redis-server", "/mnt/config-map/etc/redis.conf"]
ports:
- containerPort: 6379
volumeMounts:
- name: config-map-volume
mountPath: /mnt/config-map
volumes:
- name: config-map-volume
configMap:
name: redis-volume-config
items:
- path: "etc/redis.conf"
key: redis.conf
```
## Future Improvements
In the future, we may add the ability to specify an init-container that can
watch the volume contents for updates and respond to changes when they occur.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/configmap.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,241 @@
# Kubernetes and Cluster Federation Control Plane Resilience
## Long Term Design and Current Status
### by Quinton Hoole, Mike Danese and Justin Santa-Barbara
### December 14, 2015
## Summary
Some amount of confusion exists around how we currently ensure, and in future
want to ensure, resilience of the Kubernetes (and by implication
Kubernetes Cluster Federation) control plane. This document is an attempt to capture that
definitively. It covers areas including self-healing, high
availability, bootstrapping and recovery. Most of the information in
this document already exists in the form of github comments,
PRs/proposals, scattered documents, and corridor conversations, so this
document is primarily a consolidation and clarification of existing
ideas.
## Terms
* **Self-healing:** automatically restarting or replacing failed
processes and machines without human intervention
* **High availability:** continuing to be available and work correctly
even if some components are down or uncontactable. This typically
involves multiple replicas of critical services, and a reliable way
to find available replicas. Note that it's possible (but not
desirable) to have high
availability properties (e.g. multiple replicas) in the absence of
self-healing properties (e.g. if a replica fails, nothing replaces
it). Fairly obviously, given enough time, such systems typically
become unavailable (after enough replicas have failed).
* **Bootstrapping**: creating an empty cluster from nothing
* **Recovery**: recreating a non-empty cluster after perhaps
catastrophic failure/unavailability/data corruption
## Overall Goals
1. **Resilience to single failures:** Kubernetes clusters constrained
to single availability zones should be resilient to individual
machine and process failures by being both self-healing and highly
available (within the context of such individual failures).
1. **Ubiquitous resilience by default:** The default cluster creation
scripts for (at least) GCE, AWS and basic bare metal should adhere
to the above (self-healing and high availability) by default (with
options available to disable these features to reduce control plane
resource requirements if so required). It is hoped that other
cloud providers will also follow the above guidelines, but the
above 3 are the primary canonical use cases.
1. **Resilience to some correlated failures:** Kubernetes clusters
which span multiple availability zones in a region should by
default be resilient to complete failure of one entire availability
zone (by similarly providing self-healing and high availability in
the default cluster creation scripts as above).
1. **Default implementation shared across cloud providers:** The
differences between the default implementations of the above for
GCE, AWS and basic bare metal should be minimized. This implies
using shared libraries across these providers in the default
scripts in preference to highly customized implementations per
cloud provider. This is not to say that highly differentiated,
customized per-cloud cluster creation processes (e.g. for GKE on
GCE, or some hosted Kubernetes provider on AWS) are discouraged.
But those fall squarely outside the basic cross-platform OSS
Kubernetes distro.
1. **Self-hosting:** Where possible, Kubernetes's existing mechanisms
for achieving system resilience (replication controllers, health
checking, service load balancing etc) should be used in preference
to building a separate set of mechanisms to achieve the same thing.
This implies that self hosting (the kubernetes control plane on
kubernetes) is strongly preferred, with the caveat below.
1. **Recovery from catastrophic failure:** The ability to quickly and
reliably recover a cluster from catastrophic failure is critical,
and should not be compromised by the above goal to self-host
(i.e. it goes without saying that the cluster should be quickly and
reliably recoverable, even if the cluster control plane is
broken). This implies that such catastrophic failure scenarios
should be carefully thought out, and the subject of regular
continuous integration testing, and disaster recovery exercises.
## Relative Priorities
1. **(Possibly manual) recovery from catastrophic failures:** having a
Kubernetes cluster, and all applications running inside it, disappear forever
is perhaps the worst possible failure mode. So it is critical that we be able to
recover the applications running inside a cluster from such failures in some
well-bounded time period.
1. In theory a cluster can be recovered by replaying all API calls
that have ever been executed against it, in order, but most
often that state has been lost, and/or is scattered across
multiple client applications or groups. So in general it is
probably infeasible.
1. In theory a cluster can also be recovered to some relatively
recent non-corrupt backup/snapshot of the disk(s) backing the
etcd cluster state. But we have no default consistent
backup/snapshot, verification or restoration process. And we
don't routinely test restoration, so even if we did routinely
perform and verify backups, we have no hard evidence that we
can in practice effectively recover from catastrophic cluster
failure or data corruption by restoring from these backups. So
there's more work to be done here.
1. **Self-healing:** Most major cloud providers provide the ability to
easily and automatically replace failed virtual machines within a
small number of minutes (e.g. GCE
[Auto-restart](https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options#autorestart)
and Managed Instance Groups,
AWS [Auto-recovery](https://aws.amazon.com/blogs/aws/new-auto-recovery-for-amazon-ec2/)
and [Auto scaling](https://aws.amazon.com/autoscaling/) etc). This
can fairly trivially be used to reduce control-plane down-time due
to machine failure to a small number of minutes per failure
(i.e. typically around "3 nines" availability), provided that:
1. cluster persistent state (i.e. etcd disks) is either:
1. truly persistent (i.e. remote persistent disks), or
1. reconstructible (e.g. using etcd [dynamic member
addition](https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md#add-a-new-member)
or [backup and
recovery](https://github.com/coreos/etcd/blob/master/Documentation/admin_guide.md#disaster-recovery)).
1. and boot disks are either:
1. truly persistent (i.e. remote persistent disks), or
1. reconstructible (e.g. using boot-from-snapshot,
boot-from-pre-configured-image or
boot-from-auto-initializing image).
1. **High Availability:** This has the potential to increase
availability above the approximately "3 nines" level provided by
automated self-healing, but it's somewhat more complex, and
requires additional resources (e.g. redundant API servers and etcd
quorum members). In environments where cloud-assisted automatic
self-healing might be infeasible (e.g. on-premise bare-metal
deployments), it also gives cluster administrators more time to
respond (e.g. replace/repair failed machines) without incurring
system downtime.
## Design and Status (as of December 2015)
<table>
<tr>
<td><b>Control Plane Component</b></td>
<td><b>Resilience Plan</b></td>
<td><b>Current Status</b></td>
</tr>
<tr>
<td><b>API Server</b></td>
<td>
Multiple stateless, self-hosted, self-healing API servers behind a HA
load balancer, built out by the default "kube-up" automation on GCE,
AWS and basic bare metal (BBM). Note that the single-host approach of
having etcd listen only on localhost to ensure that only API server can
connect to it will no longer work, so alternative security will be
needed in that regard (either using firewall rules, SSL certs, or
something else). All necessary flags are currently supported to enable
SSL between API server and etcd (OpenShift runs like this out of the
box), but this needs to be woven into the "kube-up" and related
scripts. Detailed design of self-hosting and related bootstrapping
and catastrophic failure recovery will be detailed in a separate
design doc.
</td>
<td>
No scripted self-healing or HA on GCE, AWS or basic bare metal
currently exists in the OSS distro. To be clear, "no self healing"
means that even if multiple e.g. API servers are provisioned for HA
purposes, if they fail, nothing replaces them, so eventually the
system will fail. Self-healing and HA can be set up
manually by following documented instructions, but this is not
currently an automated process, and it is not tested as part of
continuous integration. So it's probably safest to assume that it
doesn't actually work in practice.
</td>
</tr>
<tr>
<td><b>Controller manager and scheduler</b></td>
<td>
Multiple self-hosted, self healing warm standby stateless controller
managers and schedulers with leader election and automatic failover of API
server clients, automatically installed by default "kube-up" automation.
</td>
<td>As above.</td>
</tr>
<tr>
<td><b>etcd</b></td>
<td>
Multiple (3-5) etcd quorum members behind a load balancer with session
affinity (to prevent clients from being bounced from one to another).
Regarding self-healing, if a node running etcd goes down, it is always necessary
to do three things:
<ol>
<li>allocate a new node (not necessary if running etcd as a pod, in
which case specific measures are required to prevent user pods from
interfering with system pods, for example using node selectors as
described in <A HREF="),
<li>start an etcd replica on that new node, and
<li>have the new replica recover the etcd state.
</ol>
In the case of local disk (which fails in concert with the machine), the etcd
state must be recovered from the other replicas. This is called
<A HREF="https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md#add-a-new-member">
dynamic member addition</A>.
In the case of remote persistent disk, the etcd state can be recovered by
attaching the remote persistent disk to the replacement node, thus the state is
recoverable even if all other replicas are down.
There are also significant performance differences between local disks and remote
persistent disks. For example, the
<A HREF="https://cloud.google.com/compute/docs/disks/#comparison_of_disk_types">
sustained throughput of local disks in GCE is approximately 20x that of remote
disks</A>.
Hence we suggest that self-healing be provided by remotely mounted persistent
disks in non-performance critical, single-zone cloud deployments. For
performance critical installations, faster local SSD's should be used, in which
case remounting on node failure is not an option, so
<A HREF="https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md ">
etcd runtime configuration</A> should be used to replace the failed machine.
Similarly, for cross-zone self-healing, cloud persistent disks are zonal, so
automatic <A HREF="https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md">
runtime configuration</A> is required. Similarly, basic bare metal deployments
cannot generally rely on remote persistent disks, so the same approach applies
there.
</td>
<td>
<A HREF="http://kubernetes.io/v1.1/docs/admin/high-availability.html">
Somewhat vague instructions exist</A> on how to set some of this up manually in
a self-hosted configuration. But automatic bootstrapping and self-healing are not
described (and are not implemented for the non-PD cases). This all still needs to
be automated and continuously tested.
</td>
</tr>
</table>
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/control-plane-resilience.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

206
vendor/k8s.io/kubernetes/docs/design/daemon.md generated vendored Normal file
View file

@ -0,0 +1,206 @@
# DaemonSet in Kubernetes
**Author**: Ananya Kumar (@AnanyaKumar)
**Status**: Implemented.
This document presents the design of the Kubernetes DaemonSet, describes use
cases, and gives an overview of the code.
## Motivation
Many users have requested a way to run a daemon on every node in a
Kubernetes cluster, or on a certain set of nodes in a cluster. This is essential
for use cases such as building a sharded datastore, or running a logger on every
node. In comes the DaemonSet, a way to conveniently create and manage
daemon-like workloads in Kubernetes.
## Use Cases
The DaemonSet can be used for user-specified system services, cluster-level
applications with strong node ties, and Kubernetes node services. Below are
example use cases in each category.
### User-Specified System Services:
Logging: Some users want a way to collect statistics about nodes in a cluster
and send those logs to an external database. For example, system administrators
might want to know if their machines are performing as expected, if they need to
add more machines to the cluster, or if they should switch cloud providers. The
DaemonSet can be used to run a data collection service (for example fluentd) on
every node and send the data to a service like ElasticSearch for analysis.
### Cluster-Level Applications
Datastore: Users might want to implement a sharded datastore in their cluster. A
few nodes in the cluster, labeled app=datastore, might be responsible for
storing data shards, and pods running on these nodes might serve data. This
architecture requires a way to bind pods to specific nodes, so it cannot be
achieved using a Replication Controller. A DaemonSet is a convenient way to
implement such a datastore.
For other uses, see the related [feature request](https://issues.k8s.io/1518)
## Functionality
The DaemonSet supports standard API features:
- create
- The spec for DaemonSets has a pod template field.
- Using the pod's nodeSelector field, DaemonSets can be restricted to operate
over nodes that have a certain label. For example, suppose that in a cluster
some nodes are labeled app=database. You can use a DaemonSet to launch a
datastore pod on exactly those nodes labeled app=database.
- Using the pod's nodeName field, DaemonSets can be restricted to operate on a
specified node.
- The PodTemplateSpec used by the DaemonSet is the same as the PodTemplateSpec
used by the Replication Controller.
- The initial implementation will not guarantee that DaemonSet pods are
created on nodes before other pods.
- The initial implementation of DaemonSet does not guarantee that DaemonSet
pods show up on nodes (for example because of resource limitations of the node),
but makes a best effort to launch DaemonSet pods (like Replication Controllers
do with pods). Subsequent revisions might ensure that DaemonSet pods show up on
nodes, preempting other pods if necessary.
- The DaemonSet controller adds an annotation:
```"kubernetes.io/created-by: \<json API object reference\>"```
- YAML example:
```YAML
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: datastore
  name: datastore
spec:
  template:
    metadata:
      labels:
        app: datastore-shard
    spec:
      nodeSelector:
        app: datastore-node
      containers:
      - name: datastore-shard
        image: kubernetes/sharded
        ports:
        - containerPort: 9042
          name: main
```
- commands that get info:
- get (e.g. kubectl get daemonsets)
- describe
- Modifiers:
- delete (if --cascade=true, then first the client turns down all the pods
controlled by the DaemonSet (by setting the nodeSelector to a uuid pair that is
unlikely to be set on any node); then it deletes the DaemonSet; then it deletes
the pods)
- label
- annotate
- update operations like patch and replace (only allowed to selector and to
nodeSelector and nodeName of pod template)
- DaemonSets have labels, so you could, for example, list all DaemonSets
with certain labels (the same way you would for a Replication Controller).
In general, for all the supported features like get, describe, update, etc,
the DaemonSet works in a similar way to the Replication Controller. However,
note that the DaemonSet and the Replication Controller are different constructs.
### Persisting Pods
- Ordinary liveness probes specified in the pod template work to keep pods
created by a DaemonSet running.
- If a daemon pod is killed or stopped, the DaemonSet will create a new
replica of the daemon pod on the node.
### Cluster Mutations
- When a new node is added to the cluster, the DaemonSet controller starts
daemon pods on the node for DaemonSets whose pod template nodeSelectors match
the node's labels.
- Suppose the user launches a DaemonSet that runs a logging daemon on all
nodes labeled “logger=fluentd”. If the user then adds the “logger=fluentd” label
to a node (that did not initially have the label), the logging daemon will
launch on the node. Additionally, if a user removes the label from a node, the
logging daemon on that node will be killed.
## Alternatives Considered
We considered several alternatives that were deemed inferior to the approach of
creating a new DaemonSet abstraction.
One alternative is to include the daemon in the machine image. In this case it
would run outside of Kubernetes proper, and thus not be monitored, health
checked, usable as a service endpoint, easily upgradable, etc.
A related alternative is to package daemons as static pods. This would address
most of the problems described above, but they would still not be easily
upgradable, and more generally could not be managed through the API server
interface.
A third alternative is to generalize the Replication Controller. We would do
something like: if you set the `replicas` field of the ReplicationControllerSpec
to -1, then it means "run exactly one replica on every node matching the
nodeSelector in the pod template." The ReplicationController would pretend
`replicas` had been set to some large number -- larger than the largest number
of nodes ever expected in the cluster -- and would use some anti-affinity
mechanism to ensure that no more than one Pod from the ReplicationController
runs on any given node. There are two downsides to this approach. First,
there would always be a large number of Pending pods in the scheduler (these
will be scheduled onto new machines when they are added to the cluster). The
second downside is more philosophical: DaemonSet and the Replication Controller
are very different concepts. We believe that having small, targeted controllers
for distinct purposes makes Kubernetes easier to understand and use, compared to
having larger multi-functional controllers (see
["Convert ReplicationController to a plugin"](http://issues.k8s.io/3058) for
some discussion of this topic).
## Design
#### Client
- Add support for DaemonSet commands to kubectl and the client. Client code was
added to pkg/client/unversioned. The main files in Kubectl that were modified are
pkg/kubectl/describe.go and pkg/kubectl/stop.go, since for other calls like Get, Create,
and Update, the client simply forwards the request to the backend via the REST
API.
#### Apiserver
- Accept, parse, validate client commands
- REST API calls are handled in pkg/registry/daemonset
- In particular, the api server will add the object to etcd
- DaemonManager listens for updates to etcd (using Framework.informer)
- API objects for DaemonSet were created in expapi/v1/types.go and
expapi/v1/register.go
- Validation code is in expapi/validation
#### Daemon Manager
- Creates new DaemonSets when requested. Launches the corresponding daemon pod
on all nodes with labels matching the new DaemonSet's selector.
- Listens for addition of new nodes to the cluster, by setting up a
framework.NewInformer that watches for the creation of Node API objects. When a
new node is added, the daemon manager will loop through each DaemonSet. If the
label of the node matches the selector of the DaemonSet, then the daemon manager
will create the corresponding daemon pod in the new node.
- The daemon manager creates a pod on a node by sending a command to the API
server, requesting that a pod be bound to the node (the node is specified via its
hostname); a simplified sketch of this node-matching flow is shown below.
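The sketch below illustrates that node-add handling with simplified stand-in types rather than the real API objects and informer machinery; `createPodOnNode` is a hypothetical callback that would issue the bind request to the API server.
```go
package sketch

// node and daemonSet are simplified stand-ins for the real API objects.
type node struct {
	Name   string
	Labels map[string]string
}

type daemonSet struct {
	Name         string
	NodeSelector map[string]string
}

// selectorMatches reports whether every key/value in selector appears in labels.
func selectorMatches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// onNodeAdded is what the daemon manager conceptually does when the informer
// reports a new node: launch a daemon pod for every DaemonSet whose selector
// matches the node's labels.
func onNodeAdded(n node, daemonSets []daemonSet, createPodOnNode func(ds daemonSet, nodeName string) error) {
	for _, ds := range daemonSets {
		if selectorMatches(ds.NodeSelector, n.Labels) {
			_ = createPodOnNode(ds, n.Name) // error handling omitted in this sketch
		}
	}
}
```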
#### Kubelet
- Does not need to be modified, but health checking will occur for the daemon
pods and revive the pods if they are killed (we set the pod restartPolicy to
Always). We reject DaemonSet objects with pod templates that don't have
restartPolicy set to Always (a small validation sketch follows below).
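The rejection mentioned above amounts to a simple validation check; a minimal sketch (not the actual code in expapi/validation) might look like this:
```go
package sketch

import "fmt"

// validateDaemonSetRestartPolicy rejects any pod template whose restart policy
// is not "Always", since daemon pods must be kept running on their nodes.
func validateDaemonSetRestartPolicy(restartPolicy string) error {
	if restartPolicy != "Always" {
		return fmt.Errorf("daemon set pod templates must set restartPolicy to Always, got %q", restartPolicy)
	}
	return nil
}
```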
## Open Issues
- Should work similarly to [Deployment](http://issues.k8s.io/1743).
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/daemon.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,429 @@
# Enhance Pluggable Policy
While trying to develop an authorization plugin for Kubernetes, we found a few
places where API extensions would ease development and add power. There are a
few goals:
1. Provide an authorization plugin that can evaluate a .Authorize() call based
on the full content of the request to RESTStorage. This includes information
like the full verb, the content of creates and updates, and the names of
resources being acted upon.
1. Provide a way to ask whether a user is permitted to take an action without
running in process with the API Authorizer. For instance, a proxy for exec
calls could ask whether a user can run the exec they are requesting.
1. Provide a way to ask who can perform a given action on a given resource.
This is useful for answering questions like, "who can create replication
controllers in my namespace".
This proposal adds to and extends the existing API so that authorizers may
provide the functionality described above. It does not attempt to describe how
the policies themselves are expressed; that is up to the authorization plugins
themselves.
## Enhancements to existing Authorization interfaces
The existing Authorization interfaces are described
[here](../admin/authorization.md). A couple additions will allow the development
of an Authorizer that matches based on different rules than the existing
implementation.
### Request Attributes
The existing authorizer.Attributes only has 5 attributes (user, groups,
isReadOnly, kind, and namespace). If we add more detailed verbs, content, and
resource names, then Authorizer plugins will have the same level of information
available to RESTStorage components in order to express more detailed policy.
The replacement excerpt is below.
An API request has the following attributes that can be considered for
authorization:
- user - the user-string which a user was authenticated as. This is included
in the Context.
- groups - the groups to which the user belongs. This is included in the
Context.
- verb - string describing the requesting action. Today we have: get, list,
watch, create, update, and delete. The old `readOnly` behavior is equivalent to
allowing get, list, watch.
- namespace - the namespace of the object being accessed, or the empty string if
the endpoint does not support namespaced objects. This is included in the
Context.
- resourceGroup - the API group of the resource being accessed
- resourceVersion - the API version of the resource being accessed
- resource - which resource is being accessed
- applies only to the API endpoints, such as `/api/v1beta1/pods`. For
miscellaneous endpoints, like `/version`, the kind is the empty string.
- resourceName - the name of the resource during a get, update, or delete
action.
- subresource - which subresource is being accessed
A non-API request has 2 attributes:
- verb - the HTTP verb of the request
- path - the path of the URL being requested
### Authorizer Interface
The existing Authorizer interface is very simple, but there isn't a way to
provide details about allows, denies, or failures. The extended detail is useful
for UIs that want to describe why certain actions are allowed or disallowed. Not
all Authorizers will want to provide that information, but for those that do,
having that capability is useful. In addition, adding a `GetAllowedSubjects`
method that returns back the users and groups that can perform a particular
action makes it possible to answer questions like, "who can see resources in my
namespace" (see [ResourceAccessReview](#ResourceAccessReview) further down).
```go
// OLD
type Authorizer interface {
Authorize(a Attributes) error
}
```
```go
// NEW
// Authorizer provides the ability to determine if a particular user can perform
// a particular action
type Authorizer interface {
// Authorize takes a Context (for namespace, user, and traceability) and
// Attributes to make a policy determination.
// reason is an optional return value that can describe why a policy decision
// was made. Reasons are useful during debugging when trying to figure out
// why a user or group has access to perform a particular action.
Authorize(ctx api.Context, a Attributes) (allowed bool, reason string, evaluationError error)
}
// AuthorizerIntrospection is an optional interface that provides the ability to
// determine which users and groups can perform a particular action. This is
// useful for building caches of who can see what. For instance, "which
// namespaces can this user see". That would allow someone to see only the
// namespaces they are allowed to view instead of having to choose between
// listing them all or listing none.
type AuthorizerIntrospection interface {
// GetAllowedSubjects takes a Context (for namespace and traceability) and
// Attributes to determine which users and groups are allowed to perform the
// described action in the namespace. This API enables the ResourceBasedReview
// requests below
GetAllowedSubjects(ctx api.Context, a Attributes) (users util.StringSet, groups util.StringSet, evaluationError error)
}
```
### SubjectAccessReviews
This set of APIs answers the question: can a user or group (use authenticated
user if none is specified) perform a given action. Given the Authorizer
interface (proposed or existing), this endpoint can be implemented generically
against any Authorizer by creating the correct Attributes and making an
.Authorize() call.
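A minimal sketch of that generic implementation is shown below; `attributesFrom` is a hypothetical helper that copies the review's user, groups, and authorization attributes into whatever `Attributes` implementation the Authorizer expects, and the surrounding REST plumbing is omitted.
```go
// Sketch only; builds on the Authorizer interface and review types in this proposal.
func serveSubjectAccessReview(ctx api.Context, authz Authorizer, review *SubjectAccessReview) *SubjectAccessReviewResponse {
	attrs := attributesFrom(review) // hypothetical: review body -> Attributes
	allowed, reason, err := authz.Authorize(ctx, attrs)
	if err != nil && reason == "" {
		reason = err.Error()
	}
	return &SubjectAccessReviewResponse{Allowed: allowed, Reason: reason}
}
```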
There are three different flavors:
1. `/apis/authorization.kubernetes.io/{version}/subjectAccessReviews` - this
checks to see if a specified user or group can perform a given action at the
cluster scope or across all namespaces. This is a highly privileged operation.
It allows a cluster-admin to inspect rights of any person across the entire
cluster and against cluster level resources.
2. `/apis/authorization.kubernetes.io/{version}/personalSubjectAccessReviews` -
this checks to see if the current user (including his groups) can perform a
given action at any specified scope. This is an unprivileged operation. It
doesn't expose any information that a user couldn't discover simply by trying an
endpoint themselves.
3. `/apis/authorization.kubernetes.io/{version}/ns/{namespace}/localSubjectAccessReviews` -
this checks to see if a specified user or group can perform a given action in
**this** namespace. This is a moderately privileged operation. In a multi-tenant
environment, having a namespace scoped resource makes it very easy to reason
about powers granted to a namespace admin. This allows a namespace admin
(someone able to manage permissions inside of one namespaces, but not all
namespaces), the power to inspect whether a given user or group can manipulate
resources in his namespace.
SubjectAccessReview is runtime.Object with associated RESTStorage that only
accepts creates. The caller POSTs a SubjectAccessReview to this URL and he gets
a SubjectAccessReviewResponse back. Here is an example of a call and its
corresponding return:
```
// input
{
"kind": "SubjectAccessReview",
"apiVersion": "authorization.kubernetes.io/v1",
"authorizationAttributes": {
"verb": "create",
"resource": "pods",
"user": "Clark",
"groups": ["admins", "managers"]
}
}
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/subjectAccessReviews -d @subject-access-review.json
// or
accessReviewResult, err := Client.SubjectAccessReviews().Create(subjectAccessReviewObject)
// output
{
"kind": "SubjectAccessReviewResponse",
"apiVersion": "authorization.kubernetes.io/v1",
"allowed": true
}
```
PersonalSubjectAccessReview is runtime.Object with associated RESTStorage that
only accepts creates. The caller POSTs a PersonalSubjectAccessReview to this URL
and he gets a PersonalSubjectAccessReviewResponse back. Here is an example of a call and
its corresponding return:
```
// input
{
"kind": "PersonalSubjectAccessReview",
"apiVersion": "authorization.kubernetes.io/v1",
"authorizationAttributes": {
"verb": "create",
"resource": "pods",
"namespace": "any-ns",
}
}
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/personalSubjectAccessReviews -d @personal-subject-access-review.json
// or
accessReviewResult, err := Client.PersonalSubjectAccessReviews().Create(subjectAccessReviewObject)
// output
{
"kind": "PersonalSubjectAccessReviewResponse",
"apiVersion": "authorization.kubernetes.io/v1",
"allowed": true
}
```
LocalSubjectAccessReview is runtime.Object with associated RESTStorage that only
accepts creates. The caller POSTs a LocalSubjectAccessReview to this URL and he
gets a LocalSubjectAccessReviewResponse back. Here is an example of a call and
its corresponding return:
```
// input
{
"kind": "LocalSubjectAccessReview",
"apiVersion": "authorization.kubernetes.io/v1",
"namespace": "my-ns"
"authorizationAttributes": {
"verb": "create",
"resource": "pods",
"user": "Clark",
"groups": ["admins", "managers"]
}
}
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/localSubjectAccessReviews -d @local-subject-access-review.json
// or
accessReviewResult, err := Client.LocalSubjectAccessReviews().Create(localSubjectAccessReviewObject)
// output
{
"kind": "LocalSubjectAccessReviewResponse",
"apiVersion": "authorization.kubernetes.io/v1",
"namespace": "my-ns"
"allowed": true
}
```
The actual Go objects look like this:
```go
type AuthorizationAttributes struct {
// Namespace is the namespace of the action being requested. Currently, there
// is no distinction between no namespace and all namespaces
Namespace string `json:"namespace" description:"namespace of the action being requested"`
// Verb is one of: get, list, watch, create, update, delete
Verb string `json:"verb" description:"one of get, list, watch, create, update, delete"`
// ResourceGroup is the API group of the resource being requested
ResourceGroup string `json:"resourceGroup" description:"group of the resource being requested"`
// ResourceVersion is the version of resource
ResourceVersion string `json:"resourceVersion" description:"version of the resource being requested"`
// Resource is one of the existing resource types
Resource string `json:"resource" description:"one of the existing resource types"`
// ResourceName is the name of the resource being requested for a "get" or
// deleted for a "delete"
ResourceName string `json:"resourceName" description:"name of the resource being requested for a get or delete"`
// Subresource is one of the existing subresources types
Subresource string `json:"subresource" description:"one of the existing subresources"`
}
// SubjectAccessReview is an object for requesting information about whether a
// user or group can perform an action
type SubjectAccessReview struct {
kapi.TypeMeta `json:",inline"`
// AuthorizationAttributes describes the action being tested.
AuthorizationAttributes `json:"authorizationAttributes" description:"the action being tested"`
// User is optional, but at least one of User or Groups must be specified
User string `json:"user" description:"optional, user to check"`
// Groups is optional, but at least one of User or Groups must be specified
Groups []string `json:"groups" description:"optional, list of groups to which the user belongs"`
}
// SubjectAccessReviewResponse describes whether or not a user or group can
// perform an action
type SubjectAccessReviewResponse struct {
kapi.TypeMeta
// Allowed is required. True if the action would be allowed, false otherwise.
Allowed bool
// Reason is optional. It indicates why a request was allowed or denied.
Reason string
}
// PersonalSubjectAccessReview is an object for requesting information about
// whether a user or group can perform an action
type PersonalSubjectAccessReview struct {
kapi.TypeMeta `json:",inline"`
// AuthorizationAttributes describes the action being tested.
AuthorizationAttributes `json:"authorizationAttributes" description:"the action being tested"`
}
// PersonalSubjectAccessReviewResponse describes whether this user can perform
// an action
type PersonalSubjectAccessReviewResponse struct {
kapi.TypeMeta
// Namespace is the namespace used for the access review
Namespace string
// Allowed is required. True if the action would be allowed, false otherwise.
Allowed bool
// Reason is optional. It indicates why a request was allowed or denied.
Reason string
}
// LocalSubjectAccessReview is an object for requesting information about
// whether a user or group can perform an action
type LocalSubjectAccessReview struct {
kapi.TypeMeta `json:",inline"`
// AuthorizationAttributes describes the action being tested.
AuthorizationAttributes `json:"authorizationAttributes" description:"the action being tested"`
// User is optional, but at least one of User or Groups must be specified
User string `json:"user" description:"optional, user to check"`
// Groups is optional, but at least one of User or Groups must be specified
Groups []string `json:"groups" description:"optional, list of groups to which the user belongs"`
}
// LocalSubjectAccessReviewResponse describes whether or not a user or group can
// perform an action
type LocalSubjectAccessReviewResponse struct {
kapi.TypeMeta
// Namespace is the namespace used for the access review
Namespace string
// Allowed is required. True if the action would be allowed, false otherwise.
Allowed bool
// Reason is optional. It indicates why a request was allowed or denied.
Reason string
}
```
### ResourceAccessReview
This set of APIs answers the question: which users and groups can perform the
specified verb on the specified resourceKind. Given the Authorizer interface
described above, this endpoint can be implemented generically against any
Authorizer by calling the .GetAllowedSubjects() function.
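Analogous to the SubjectAccessReview sketch earlier, a generic implementation might look like the following; `attributesFrom` is again a hypothetical helper, and it is assumed that `util.StringSet` exposes a `List()` accessor returning a sorted slice.
```go
// Sketch only; builds on the AuthorizerIntrospection interface defined above.
func serveResourceAccessReview(ctx api.Context, introspector AuthorizerIntrospection, review *ResourceAccessReview) (*ResourceAccessReviewResponse, error) {
	attrs := attributesFrom(review) // hypothetical: review body -> Attributes
	users, groups, err := introspector.GetAllowedSubjects(ctx, attrs)
	if err != nil {
		return nil, err
	}
	return &ResourceAccessReviewResponse{Users: users.List(), Groups: groups.List()}, nil
}
```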
There are two different flavors:
1. `/apis/authorization.kubernetes.io/{version}/resourceAccessReview` - this
checks to see which users and groups can perform a given action at the cluster
scope or across all namespaces. This is a highly privileged operation. It allows
a cluster-admin to inspect rights of all subjects across the entire cluster and
against cluster level resources.
2. `/apis/authorization.kubernetes.io/{version}/ns/{namespace}/localResourceAccessReviews` -
this checks to see which users and groups can perform a given action in **this**
namespace. This is a moderately privileged operation. In a multi-tenant
environment, having a namespace scoped resource makes it very easy to reason
about powers granted to a namespace admin. This allows a namespace admin
(someone able to manage permissions inside one namespace, but not all
namespaces), the power to inspect which users and groups can manipulate
resources in his namespace.
ResourceAccessReview is a runtime.Object with associated RESTStorage that only
accepts creates. The caller POSTs a ResourceAccessReview to this URL and he gets
a ResourceAccessReviewResponse back. Here is an example of a call and its
corresponding return:
```
// input
{
"kind": "ResourceAccessReview",
"apiVersion": "authorization.kubernetes.io/v1",
"authorizationAttributes": {
"verb": "list",
"resource": "replicationcontrollers"
}
}
// POSTed like this
curl -X POST /apis/authorization.kubernetes.io/{version}/resourceAccessReviews -d @resource-access-review.json
// or
accessReviewResult, err := Client.ResourceAccessReviews().Create(resourceAccessReviewObject)
// output
{
"kind": "ResourceAccessReviewResponse",
"apiVersion": "authorization.kubernetes.io/v1",
"namespace": "default"
"users": ["Clark", "Hubert"],
"groups": ["cluster-admins"]
}
```
The actual Go objects look like this:
```go
// ResourceAccessReview is a means to request a list of which users and groups
// are authorized to perform the action specified by spec
type ResourceAccessReview struct {
kapi.TypeMeta `json:",inline"`
// AuthorizationAttributes describes the action being tested.
AuthorizationAttributes `json:"authorizationAttributes" description:"the action being tested"`
}
// ResourceAccessReviewResponse describes who can perform the action
type ResourceAccessReviewResponse struct {
kapi.TypeMeta
// Users is the list of users who can perform the action
Users []string
// Groups is the list of groups who can perform the action
Groups []string
}
// LocalResourceAccessReview is a means to request a list of which users and
// groups are authorized to perform the action specified in a specific namespace
type LocalResourceAccessReview struct {
kapi.TypeMeta `json:",inline"`
// AuthorizationAttributes describes the action being tested.
AuthorizationAttributes `json:"authorizationAttributes" description:"the action being tested"`
}
// LocalResourceAccessReviewResponse describes who can perform the action
type LocalResourceAccessReviewResponse struct {
kapi.TypeMeta
// Namespace is the namespace used for the access review
Namespace string
// Users is the list of users who can perform the action
Users []string
// Groups is the list of groups who can perform the action
Groups []string
}
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/enhance-pluggable-policy.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

View file

@ -0,0 +1,169 @@
# Kubernetes Event Compression
This document captures the design of event compression.
## Background
Kubernetes components can get into a state where they generate tons of events.
The events can be categorized in one of two ways:
1. same - The event is identical to previous events except it varies only on
timestamp.
2. similar - The event is identical to previous events except it varies on
timestamp and message.
For example, when pulling a non-existing image, Kubelet will repeatedly generate
`image_not_existing` and `container_is_waiting` events until upstream components
correct the image. When this happens, the spam from the repeated events makes
the entire event mechanism useless. It also appears to cause memory pressure in
etcd (see [#3853](http://issue.k8s.io/3853)).
The goal is to introduce event counting to increment identical events, and event
aggregation to collapse similar events.
## Proposal
Each binary that generates events (for example, `kubelet`) should keep track of
previously generated events so that it can collapse recurring events into a
single event instead of creating a new instance for each new event. In addition,
if many similar events are created, events should be aggregated into a single
event to reduce spam.
Event compression should be best effort (not guaranteed). That is, in the worst
case, `n` identical (minus timestamp) events may still result in `n` event
entries.
## Design
Instead of a single Timestamp, each event object
[contains](http://releases.k8s.io/HEAD/pkg/api/types.go#L1111) the following
fields:
* `FirstTimestamp unversioned.Time`
* The date/time of the first occurrence of the event.
* `LastTimestamp unversioned.Time`
* The date/time of the most recent occurrence of the event.
* On first occurrence, this is equal to the FirstTimestamp.
* `Count int`
* The number of occurrences of this event between FirstTimestamp and
LastTimestamp.
* On first occurrence, this is 1.
Each binary that generates events:
* Maintains a historical record of previously generated events:
* Implemented with
["Least Recently Used Cache"](https://github.com/golang/groupcache/blob/master/lru/lru.go)
in [`pkg/client/record/events_cache.go`](../../pkg/client/record/events_cache.go).
* Implemented behind an `EventCorrelator` that manages two subcomponents:
`EventAggregator` and `EventLogger`.
* The `EventCorrelator` observes all incoming events and lets each
subcomponent visit and modify the event in turn.
* The `EventAggregator` runs an aggregation function over each event. This
function buckets each event based on an `aggregateKey` and identifies the event
uniquely with a `localKey` in that bucket.
* The default aggregation function groups similar events that differ only by
`event.Message`. Its `localKey` is `event.Message` and its aggregate key is
produced by joining:
* `event.Source.Component`
* `event.Source.Host`
* `event.InvolvedObject.Kind`
* `event.InvolvedObject.Namespace`
* `event.InvolvedObject.Name`
* `event.InvolvedObject.UID`
* `event.InvolvedObject.APIVersion`
* `event.Reason`
* If the `EventAggregator` observes a similar event produced 10 times in a 10
minute window, it drops the event that was provided as input and creates a new
event that differs only on the message. The message denotes that this event is
used to group similar events that matched on reason. This aggregated `Event` is
then used in the event processing sequence.
* The `EventLogger` observes the event coming out of the `EventAggregator` and tracks
the number of times it has observed that event previously by incrementing a key
in a cache associated with that matching event.
* The key in the cache is generated from the event object minus
timestamps/count/transient fields; specifically, the following event fields are
used to construct a unique key for an event (a sketch of both keys follows this list):
* `event.Source.Component`
* `event.Source.Host`
* `event.InvolvedObject.Kind`
* `event.InvolvedObject.Namespace`
* `event.InvolvedObject.Name`
* `event.InvolvedObject.UID`
* `event.InvolvedObject.APIVersion`
* `event.Reason`
* `event.Message`
* The LRU cache is capped at 4096 events for both `EventAggregator` and
`EventLogger`. That means if a component (e.g. kubelet) runs for a long period
of time and generates tons of unique events, the previously generated events
cache will not grow unchecked in memory. Instead, after 4096 unique events are
generated, the oldest events are evicted from the cache.
* When an event is generated, the previously generated events cache is checked
(see [`pkg/client/unversioned/record/event.go`](http://releases.k8s.io/HEAD/pkg/client/record/event.go)).
* If the key for the new event matches the key for a previously generated
event (meaning all of the above fields match between the new event and some
previously generated event), then the event is considered to be a duplicate and
the existing event entry is updated in etcd:
* The new PUT (update) event API is called to update the existing event
entry in etcd with the new last seen timestamp and count.
* The event is also updated in the previously generated events cache with
an incremented count, updated last seen timestamp, name, and new resource
version (all required to issue a future event update).
* If the key for the new event does not match the key for any previously
generated event (meaning none of the above fields match between the new event
and any previously generated events), then the event is considered to be
new/unique and a new event entry is created in etcd:
* The usual POST/create event API is called to create a new event entry in
etcd.
* An entry for the event is also added to the previously generated events
cache.
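To make the two keys concrete, here is a minimal sketch of how the aggregation key and the duplicate-detection key described above could be assembled from the listed fields. This is illustrative only; the actual implementation lives in `pkg/client/record/events_cache.go`.
```go
package record

import (
	"strings"

	"k8s.io/kubernetes/pkg/api"
)

// aggregateKey sketches the EventAggregator bucket key: every field listed
// above except the message, joined together.
func aggregateKey(e *api.Event) string {
	return strings.Join([]string{
		e.Source.Component,
		e.Source.Host,
		e.InvolvedObject.Kind,
		e.InvolvedObject.Namespace,
		e.InvolvedObject.Name,
		string(e.InvolvedObject.UID),
		e.InvolvedObject.APIVersion,
		e.Reason,
	}, "")
}

// eventKey sketches the EventLogger key: the same fields plus the message, so
// only events that also match on Message are counted as exact duplicates.
func eventKey(e *api.Event) string {
	return aggregateKey(e) + e.Message
}
```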
## Issues/Risks
* Compression is not guaranteed, because each component keeps track of event
history in memory
* An application restart causes event history to be cleared, meaning event
history is not preserved across application restarts and compression will not
occur across component restarts.
* Because an LRU cache is used to keep track of previously generated events,
if too many unique events are generated, old events will be evicted from the
cache, so events will only be compressed until they age out of the events cache,
at which point any new instance of the event will cause a new entry to be
created in etcd.
## Example
Sample kubectl output:
```console
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-node-4.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-1.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-1.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-3.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-3.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-2.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-2.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-influx-grafana-controller-0133o Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 elasticsearch-logging-controller-fplln Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 kibana-logging-controller-gziey Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 skydns-ls6k1 Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest"
Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-node-4.c.saad-dev-vms.internal
```
This demonstrates how what would have been 20 separate entries (indicating
scheduling failures) were collapsed/compressed down to 5 entries.
## Related Pull Requests/Issues
* Issue [#4073](http://issue.k8s.io/4073): Compress duplicate events.
* PR [#4157](http://issue.k8s.io/4157): Add "Update Event" to Kubernetes API.
* PR [#4206](http://issue.k8s.io/4206): Modify Event struct to allow
compressing multiple recurring events in to a single event.
* PR [#4306](http://issue.k8s.io/4306): Compress recurring events in to a
single event to optimize etcd storage.
* PR [#4444](http://pr.k8s.io/4444): Switch events history to use LRU cache
instead of map.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/event_compression.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

417
vendor/k8s.io/kubernetes/docs/design/expansion.md generated vendored Normal file
View file

@ -0,0 +1,417 @@
# Variable expansion in pod command, args, and env
## Abstract
A proposal for the expansion of environment variables using a simple `$(var)`
syntax.
## Motivation
It is extremely common for users to need to compose environment variables or
pass arguments to their commands using the values of environment variables.
Kubernetes should provide a facility for the 80% cases in order to decrease
coupling and the use of workarounds.
## Goals
1. Define the syntax format
2. Define the scoping and ordering of substitutions
3. Define the behavior for unmatched variables
4. Define the behavior for unexpected/malformed input
## Constraints and Assumptions
* This design should describe the simplest possible syntax to accomplish the
use-cases.
* Expansion syntax will not support more complicated shell-like behaviors such
as default values (viz: `$(VARIABLE_NAME:"default")`), inline substitution, etc.
## Use Cases
1. As a user, I want to compose new environment variables for a container using
a substitution syntax to reference other variables in the container's
environment and service environment variables.
1. As a user, I want to substitute environment variables into a container's
command.
1. As a user, I want to do the above without requiring the container's image to
have a shell.
1. As a user, I want to be able to specify a default value for a service
variable which may not exist.
1. As a user, I want to see an event associated with the pod if an expansion
fails (ie, references variable names that cannot be expanded).
### Use Case: Composition of environment variables
Currently, containers are injected with docker-style environment variables for
the services in their pod's namespace. There are several variables for each
service, but users routinely need to compose URLs based on these variables
because there is not a variable for the exact format they need. Users should be
able to build new environment variables with the exact format they need.
Eventually, it should also be possible to turn off the automatic injection of
the docker-style variables into pods and let the users consume the exact
information they need via the downward API and composition.
#### Expanding expanded variables
It should be possible to reference a variable which is itself the result of an
expansion, if the referenced variable is declared in the container's environment
prior to the one referencing it. Put another way -- a container's environment is
expanded in order, and expanded variables are available to subsequent
expansions.
### Use Case: Variable expansion in command
Users frequently need to pass the values of environment variables to a
container's command. Currently, Kubernetes does not perform any expansion of
variables. The workaround is to invoke a shell in the container's command and
have the shell perform the substitution, or to write a wrapper script that sets
up the environment and runs the command. This has a number of drawbacks:
1. Solutions that require a shell are unfriendly to images that do not contain
a shell.
2. Wrapper scripts make it harder to use images as base images.
3. Wrapper scripts increase coupling to Kubernetes.
Users should be able to do the 80% case of variable expansion in command without
writing a wrapper script or adding a shell invocation to their containers'
commands.
### Use Case: Images without shells
The current workaround for variable expansion in a container's command requires
the container's image to have a shell. This is unfriendly to images that do not
contain a shell (`scratch` images, for example). Users should be able to perform
the other use-cases in this design without regard to the content of their
images.
### Use Case: See an event for incomplete expansions
It is possible that a container with incorrect variable values or command line
may continue to run for a long period of time, and that the end-user would have
no visual or obvious warning of the incorrect configuration. If the kubelet
creates an event when an expansion references a variable that cannot be
expanded, it will help users quickly detect problems with expansions.
## Design Considerations
### What features should be supported?
In order to limit complexity, we want to provide the right amount of
functionality so that the 80% cases can be realized and nothing more. We felt
that the essentials boiled down to:
1. Ability to perform direct expansion of variables in a string.
2. Ability to specify default values via a prioritized mapping function but
without support for defaults as a syntax-level feature.
### What should the syntax be?
The exact syntax for variable expansion has a large impact on how users perceive
and relate to the feature. We considered implementing a very restrictive subset
of the shell `${var}` syntax. This syntax is an attractive option on some level,
because many people are familiar with it. However, this syntax also has a large
number of lesser known features such as the ability to provide default values
for unset variables, perform inline substitution, etc.
In the interest of preventing conflation of the expansion feature in Kubernetes
with the shell feature, we chose a different syntax similar to the one in
Makefiles, `$(var)`. We also chose not to support the bare `$var` format, since
it is not required to implement the required use-cases.
Nested references, ie, variable expansion within variable names, are not
supported.
#### How should unmatched references be treated?
Ideally, it should be extremely clear when a variable reference couldn't be
expanded. We decided the best experience for unmatched variable references would
be to have the entire reference, syntax included, show up in the output. As an
example, if the reference `$(VARIABLE_NAME)` cannot be expanded, then
`$(VARIABLE_NAME)` should be present in the output.
#### Escaping the operator
Although the `$(var)` syntax does overlap with the `$(command)` form of command
substitution supported by many shells, because unexpanded variables are present
verbatim in the output, we expect this will not present a problem to many users.
If there is a collision between a variable name and command substitution syntax,
the syntax can be escaped with the form `$$(VARIABLE_NAME)`, which will evaluate
to `$(VARIABLE_NAME)` whether `VARIABLE_NAME` can be expanded or not.
## Design
This design encompasses the variable expansion syntax and specification and the
changes needed to incorporate the expansion feature into the container's
environment and command.
### Syntax and expansion mechanics
This section describes the expansion syntax, evaluation of variable values, and
how unexpected or malformed inputs are handled.
#### Syntax
The inputs to the expansion feature are:
1. A UTF-8 string (the input string) which may contain variable references.
2. A function (the mapping function) that maps the name of a variable to the
variable's value, of type `func(string) string`.
Variable references in the input string are indicated exclusively with the syntax
`$(<variable-name>)`. The syntax tokens are:
- `$`: the operator,
- `(`: the reference opener, and
- `)`: the reference closer.
The operator has no meaning unless accompanied by the reference opener and
closer tokens. The operator can be escaped using `$$`. One literal `$` will be
emitted for each `$$` in the input.
The reference opener and closer characters have no meaning when not part of a
variable reference. If a variable reference is malformed, viz: `$(VARIABLE_NAME`
without a closing expression, the operator and expression opening characters are
treated as ordinary characters without special meanings.
#### Scope and ordering of substitutions
The scope in which variable references are expanded is defined by the mapping
function. Within the mapping function, any arbitrary strategy may be used to
determine the value of a variable name. The most basic implementation of a
mapping function is to use a `map[string]string` to lookup the value of a
variable.
In order to support default values for variables like service variables
presented by the kubelet, which may not be bound because the service that
provides them does not yet exist, there should be a mapping function that uses a
list of `map[string]string` like:
```go
func MakeMappingFunc(maps ...map[string]string) func(string) string {
return func(input string) string {
for _, context := range maps {
val, ok := context[input]
if ok {
return val
}
}
return ""
}
}
// elsewhere
containerEnv := map[string]string{
"FOO": "BAR",
"ZOO": "ZAB",
"SERVICE2_HOST": "some-host",
}
serviceEnv := map[string]string{
"SERVICE_HOST": "another-host",
"SERVICE_PORT": "8083",
}
// single-map variation
mapping := MakeMappingFunc(containerEnv)
// default variables not found in serviceEnv
mappingWithDefaults := MakeMappingFunc(serviceEnv, containerEnv)
```
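For illustration, with the maps defined above the mapping functions would behave roughly as follows (a sketch; the empty-string fallback is simply the behavior of `MakeMappingFunc` as written):
```go
// Lookups against the maps defined above (illustrative only).
mapping("FOO")                       // "BAR"          -- found in containerEnv
mappingWithDefaults("SERVICE_HOST")  // "another-host" -- serviceEnv is consulted first
mappingWithDefaults("SERVICE2_HOST") // "some-host"    -- falls back to containerEnv
mappingWithDefaults("NO_SUCH_VAR")   // ""             -- no map contains the key
```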
### Implementation changes
The necessary changes to implement this functionality are:
1. Add a new interface, `ObjectEventRecorder`, which is like the
`EventRecorder` interface, but scoped to a single object, and a function that
returns an `ObjectEventRecorder` given an `ObjectReference` and an
`EventRecorder`.
2. Introduce `third_party/golang/expansion` package that provides:
1. An `Expand(string, func(string) string) string` function.
2. A `MappingFuncFor(ObjectEventRecorder, ...map[string]string) func(string) string`
function.
3. Make the kubelet expand environment correctly.
4. Make the kubelet expand command correctly.
#### Event Recording
In order to provide an event when an expansion references undefined variables,
the mapping function must be able to create an event. In order to facilitate
this, we should create a new interface in the `api/client/record` package which
is similar to `EventRecorder`, but scoped to a single object:
```go
// ObjectEventRecorder knows how to record events about a single object.
type ObjectEventRecorder interface {
// Event constructs an event from the given information and puts it in the queue for sending.
// 'reason' is the reason this event is generated. 'reason' should be short and unique; it will
// be used to automate handling of events, so imagine people writing switch statements to
// handle them. You want to make that easy.
// 'message' is intended to be human readable.
//
// The resulting event will be created in the same namespace as the reference object.
Event(reason, message string)
// Eventf is just like Event, but with Sprintf for the message field.
Eventf(reason, messageFmt string, args ...interface{})
// PastEventf is just like Eventf, but with an option to specify the event's 'timestamp' field.
PastEventf(timestamp unversioned.Time, reason, messageFmt string, args ...interface{})
}
```
There should also be a function that can construct an `ObjectEventRecorder` from a `runtime.Object`
and an `EventRecorder`:
```go
type objectRecorderImpl struct {
object runtime.Object
recorder EventRecorder
}
func (r *objectRecorderImpl) Event(reason, message string) {
r.recorder.Event(r.object, reason, message)
}
func ObjectEventRecorderFor(object runtime.Object, recorder EventRecorder) ObjectEventRecorder {
return &objectRecorderImpl{object, recorder}
}
```
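For example, the kubelet could wrap its existing recorder once per pod so that the mapping function can raise events without knowing which object they belong to. This is a hedged sketch; `pod`, `recorder`, and the reason/message strings are illustrative.
```go
// Sketch: scope an existing EventRecorder to a single pod so expansion code
// can emit events for it. The reason and message strings are illustrative.
podRecorder := ObjectEventRecorderFor(pod, recorder)
podRecorder.Eventf("UnresolvableReference", "cannot expand variable reference %q", name)
```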
#### Expansion package
The expansion package should provide two methods:
```go
// MappingFuncFor returns a mapping function for use with Expand that
// implements the expansion semantics defined in the expansion spec; it
// returns the input string wrapped in the expansion syntax if no mapping
// for the input is found. If no expansion is found for a key, an event
// is raised on the given recorder.
func MappingFuncFor(recorder record.ObjectEventRecorder, context ...map[string]string) func(string) string {
// ...
}
// Expand replaces variable references in the input string according to
// the expansion spec using the given mapping function to resolve the
// values of variables.
func Expand(input string, mapping func(string) string) string {
// ...
}
```
#### Kubelet changes
The Kubelet should be made to correctly expand variables references in a
container's environment, command, and args. Changes will need to be made to:
1. The `makeEnvironmentVariables` function in the kubelet; this is used by
`GenerateRunContainerOptions`, which is used by both the docker and rkt
container runtimes.
2. The docker manager `setEntrypointAndCommand` func has to be changed to
perform variable expansion.
3. The rkt runtime should be made to support expansion in command and args
when support for it is implemented.
### Examples
#### Inputs and outputs
These examples are in the context of the mapping:
| Name | Value |
|-------------|------------|
| `VAR_A` | `"A"` |
| `VAR_B` | `"B"` |
| `VAR_C` | `"C"` |
| `VAR_REF` | `$(VAR_A)` |
| `VAR_EMPTY` | `""` |
No other variables are defined.
| Input | Result |
|--------------------------------|----------------------------|
| `"$(VAR_A)"` | `"A"` |
| `"___$(VAR_B)___"` | `"___B___"` |
| `"___$(VAR_C)"` | `"___C"` |
| `"$(VAR_A)-$(VAR_A)"` | `"A-A"` |
| `"$(VAR_A)-1"` | `"A-1"` |
| `"$(VAR_A)_$(VAR_B)_$(VAR_C)"` | `"A_B_C"` |
| `"$$(VAR_B)_$(VAR_A)"` | `"$(VAR_B)_A"` |
| `"$$(VAR_A)_$$(VAR_B)"` | `"$(VAR_A)_$(VAR_B)"` |
| `"f000-$$VAR_A"` | `"f000-$VAR_A"` |
| `"foo\\$(VAR_C)bar"` | `"foo\Cbar"` |
| `"foo\\\\$(VAR_C)bar"` | `"foo\\Cbar"` |
| `"foo\\\\\\\\$(VAR_A)bar"` | `"foo\\\\Abar"` |
| `"$(VAR_A$(VAR_B))"` | `"$(VAR_A$(VAR_B))"` |
| `"$(VAR_A$(VAR_B)"` | `"$(VAR_A$(VAR_B)"` |
| `"$(VAR_REF)"` | `"$(VAR_A)"` |
| `"%%$(VAR_REF)--$(VAR_REF)%%"` | `"%%$(VAR_A)--$(VAR_A)%%"` |
| `"foo$(VAR_EMPTY)bar"` | `"foobar"` |
| `"foo$(VAR_Awhoops!"` | `"foo$(VAR_Awhoops!"` |
| `"f00__(VAR_A)__"` | `"f00__(VAR_A)__"` |
| `"$?_boo_$!"` | `"$?_boo_$!"` |
| `"$VAR_A"` | `"$VAR_A"` |
| `"$(VAR_DNE)"` | `"$(VAR_DNE)"` |
| `"$$$$$$(BIG_MONEY)"` | `"$$$(BIG_MONEY)"` |
| `"$$$$$$(VAR_A)"` | `"$$$(VAR_A)"` |
| `"$$$$$$$(GOOD_ODDS)"` | `"$$$$(GOOD_ODDS)"` |
| `"$$$$$$$(VAR_A)"` | `"$$$A"` |
| `"$VAR_A)"` | `"$VAR_A)"` |
| `"${VAR_A}"` | `"${VAR_A}"` |
| `"$(VAR_B)_______$(A"` | `"B_______$(A"` |
| `"$(VAR_C)_______$("` | `"C_______$("` |
| `"$(VAR_A)foobarzab$"` | `"Afoobarzab$"` |
| `"foo-\\$(VAR_A"` | `"foo-\$(VAR_A"` |
| `"--$($($($($--"` | `"--$($($($($--"` |
| `"$($($($($--foo$("` | `"$($($($($--foo$("` |
| `"foo0--$($($($("` | `"foo0--$($($($("` |
| `"$(foo$$var)` | `$(foo$$var)` |
#### In a pod: building a URL
Notice the `$(var)` syntax.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: expansion-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: PUBLIC_URL
value: "http://$(GITSERVER_SERVICE_HOST):$(GITSERVER_SERVICE_PORT)"
restartPolicy: Never
```
#### In a pod: building a URL using downward API
```yaml
apiVersion: v1
kind: Pod
metadata:
name: expansion-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "env" ]
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: "metadata.namespace"
- name: PUBLIC_URL
value: "http://gitserver.$(POD_NAMESPACE):$(SERVICE_PORT)"
restartPolicy: Never
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/expansion.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

203
vendor/k8s.io/kubernetes/docs/design/extending-api.md generated vendored Normal file
View file

@ -0,0 +1,203 @@
# Adding custom resources to the Kubernetes API server
This document describes the design for implementing the storage of custom API
types in the Kubernetes API Server.
## Resource Model
### The ThirdPartyResource
The `ThirdPartyResource` resource describes the multiple versions of a custom
resource that the user wants to add to the Kubernetes API. `ThirdPartyResource`
is a non-namespaced resource; attempting to place it in a namespace will return
an error.
Each `ThirdPartyResource` resource has the following:
* Standard Kubernetes object metadata.
* ResourceKind - The kind of the resources described by this third party
resource.
* Description - A free text description of the resource.
* APIGroup - An API group that this resource should be placed into.
* Versions - One or more `Version` objects.
### The `Version` Object
The `Version` object describes a single concrete version of a custom resource.
The `Version` object currently only specifies:
* The `Name` of the version.
* The `APIGroup` this version should belong to.
## Expectations about third party objects
Every object that is added to a third-party Kubernetes object store is expected
to contain Kubernetes compatible [object metadata](../devel/api-conventions.md#metadata).
This requirement enables the Kubernetes API server to provide the following
features:
* Filtering lists of objects via label queries.
* `resourceVersion`-based optimistic concurrency via compare-and-swap.
* Versioned storage.
* Event recording.
* Integration with basic `kubectl` command line tooling.
* Watch for resource changes.
The `Kind` for an instance of a third-party object (e.g. CronTab) below is
expected to be programmatically convertible to the name of the resource using
the following conversion. Kinds are expected to be of the form
`<CamelCaseKind>`, and the `APIVersion` for the object is expected to be
`<api-group>/<api-version>`. To prevent collisions, it's expected that you'll
use a DNS name of at least three segments for the API group, e.g. `mygroup.example.com`.
A full `APIVersion` would then be, for example, `mygroup.example.com/v1`.
'CamelCaseKind' is the specific type name.
To convert this into the `metadata.name` for the `ThirdPartyResource` resource
instance, the `<domain-name>` is copied verbatim, the `CamelCaseKind` is then
converted using '-' instead of capitalization ('camel-case'), with the first
character being assumed to be capitalized. In pseudo code:
```go
// Convert a CamelCase kind name to its dashed form, e.g. "CamelCaseKind" -> "camel-case-kind".
var result []rune
for i, r := range kindName {
	if i > 0 && unicode.IsUpper(r) { // no leading '-' before the first (capitalized) character
		result = append(result, '-')
	}
	result = append(result, unicode.ToLower(r))
}
```
As a concrete example, the resource named `camel-case-kind.mygroup.example.com` defines
resources of Kind `CamelCaseKind`, in the APIGroup with the prefix
`mygroup.example.com/...`.
The reason for this is to enable rapid lookup of a `ThirdPartyResource` object
given the kind information. This is also the reason why `ThirdPartyResource` is
not namespaced.
## Usage
When a user creates a new `ThirdPartyResource`, the Kubernetes API Server reacts
by creating a new, namespaced RESTful resource path. For now, non-namespaced
objects are not supported. As with existing built-in objects, deleting a
namespace deletes all third party resources in that namespace.
For example, if a user creates:
```yaml
metadata:
name: cron-tab.mygroup.example.com
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
description: "A specification of a Pod to run on a cron style schedule"
versions:
- name: v1
- name: v2
```
Then the API server will program in the new RESTful resource path:
* `/apis/mygroup.example.com/v1/namespaces/<namespace>/crontabs/...`
**Note: It may take a while before the RESTful resource path registration happens; please
always check this before you create resource instances.**
Now that this schema has been created, a user can `POST`:
```json
{
"metadata": {
"name": "my-new-cron-object"
},
"apiVersion": "mygroup.example.com/v1",
"kind": "CronTab",
"cronSpec": "* * * * /5",
"image": "my-awesome-cron-image"
}
```
to: `/apis/mygroup.example.com/v1/namespaces/default/crontabs`
and the corresponding data will be stored into etcd by the APIServer, so that
when the user issues:
```
GET /apis/mygroup.example.com/v1/namespaces/default/crontabs/my-new-cron-object
```
And when they do that, they will get back the same data, but with additional
Kubernetes metadata (e.g. `resourceVersion`, `createdTimestamp`) filled in.
Likewise, to list all resources, a user can issue:
```
GET /apis/mygroup.example.com/v1/namespaces/default/crontabs
```
and get back:
```json
{
"apiVersion": "mygroup.example.com/v1",
"kind": "CronTabList",
"items": [
{
"metadata": {
"name": "my-new-cron-object"
},
"apiVersion": "mygroup.example.com/v1",
"kind": "CronTab",
"cronSpec": "* * * * /5",
"image": "my-awesome-cron-image"
}
]
}
```
Because all objects are expected to contain standard Kubernetes metadata fields,
these list operations can also use label queries to filter requests down to
specific subsets.
Likewise, clients can use watch endpoints to watch for changes to stored
objects.
## Storage
In order to store custom user data in a versioned fashion inside of etcd, we
need to also introduce a `Codec`-compatible object for persistent storage in
etcd. This object is `ThirdPartyResourceData` and it contains:
* Standard API Metadata.
* `Data`: The raw JSON data for this custom object.
### Storage key specification
Each custom object stored by the API server needs a custom key in storage; this
is described below:
#### Definitions
* `resource-namespace`: the namespace of the particular resource that is
being stored
* `resource-name`: the name of the particular resource being stored
* `third-party-resource-namespace`: the namespace of the `ThirdPartyResource`
resource that represents the type for the specific instance being stored
* `third-party-resource-name`: the name of the `ThirdPartyResource` resource
that represents the type for the specific instance being stored
#### Key
Given the definitions above, the key for a specific third-party object is:
```
${standard-k8s-prefix}/third-party-resources/${third-party-resource-namespace}/${third-party-resource-name}/${resource-namespace}/${resource-name}
```
Thus, listing a third-party resource can be achieved by listing the directory:
```
${standard-k8s-prefix}/third-party-resources/${third-party-resource-namespace}/${third-party-resource-name}/${resource-namespace}/
```
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/extending-api.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

File diff suppressed because it is too large Load diff

File diff suppressed because it is too large Load diff

View file

@ -0,0 +1,407 @@
# Ubernetes Design Spec (phase one)
**Huawei PaaS Team**
## INTRODUCTION
In this document we propose a design for the “Control Plane” of
Kubernetes (K8S) federation (a.k.a. “Ubernetes”). For background of
this work please refer to
[this proposal](../../docs/proposals/federation.md).
The document is arranged as follows. First we briefly list scenarios
and use cases that motivate K8S federation work. These use cases drive
the design and they also verify the design. We summarize the
functionality requirements from these use cases, and define the “in
scope” functionalities that will be covered by this design (phase
one). After that we give an overview of the proposed architecture, API
and building blocks, and we also go through several activity flows to
see how these building blocks work together to support the use cases.
## REQUIREMENTS
There are many reasons why customers may want to build a K8S
federation:
+ **High Availability:** Customers want to be immune to the outage of
a single availability zone, region or even a cloud provider.
+ **Sensitive workloads:** Some workloads can only run on a particular
cluster. They cannot be scheduled to or migrated to other clusters.
+ **Capacity overflow:** Customers prefer to run workloads on a
primary cluster. But if the capacity of the cluster is not
sufficient, workloads should be automatically distributed to other
clusters.
+ **Vendor lock-in avoidance:** Customers want to spread their
workloads on different cloud providers, and can easily increase or
decrease the workload proportion of a specific provider.
+ **Cluster Size Enhancement:** Currently a K8S cluster can only support
a limited size. While the community is actively improving it, it can
be expected that cluster size will be a problem if K8S is used for
large workloads or public PaaS infrastructure. While we can separate
different tenants to different clusters, it would be good to have a
unified view.
Here are the functionality requirements derived from above use cases:
+ Clients of the federation control plane API server can register and deregister
clusters.
+ Workloads should be spread to different clusters according to the
workload distribution policy.
+ Pods are able to discover and connect to services hosted in other
clusters (in cases where inter-cluster networking is necessary,
desirable and implemented).
+ Traffic to these pods should be spread across clusters (in a manner
similar to load balancing, although it might not be strictly
speaking balanced).
+ The control plane needs to know when a cluster is down, and migrate
the workloads to other clusters.
+ Clients have a unified view and a central control point for above
activities.
## SCOPE
It's difficult to have a perfect design in one pass that implements
all the above requirements. Therefore we will go with an iterative
approach to design and build the system. This document describes the
phase one of the whole work. In phase one we will cover only the
following objectives:
+ Define the basic building blocks and API objects of control plane
+ Implement a basic end-to-end workflow
+ Clients register federated clusters
+ Clients submit a workload
+ The workload is distributed to different clusters
+ Service discovery
+ Load balancing
The following parts are NOT covered in phase one:
+ Authentication and authorization (other than basic client
authentication against the ubernetes API, and from ubernetes control
plane to the underlying kubernetes clusters).
+ Deployment units other than replication controller and service
+ Complex distribution policy of workloads
+ Service affinity and migration
## ARCHITECTURE
The overall architecture of a control plane is shown as following:
![Ubernetes Architecture](ubernetes-design.png)
Some design principles we are following in this architecture:
1. Keep the underlying K8S clusters independent. They should have no
knowledge of the control plane or of each other.
1. Keep the Ubernetes API interface compatible with K8S API as much as
possible.
1. Re-use concepts from K8S as much as possible. This reduces the
customers' learning curve and is good for adoption.
Below is a brief description of each module contained in the above diagram.
## Ubernetes API Server
The API Server in the Ubernetes control plane works just like the API
Server in K8S. It talks to a distributed key-value store to persist,
retrieve and watch API objects. This store is completely distinct
from the kubernetes key-value stores (etcd) in the underlying
kubernetes clusters. We still use `etcd` as the distributed
storage so customers don't need to learn and manage a different
storage system, although it is envisaged that other storage systems
(Consul, ZooKeeper) will probably be developed and supported over
time.
## Ubernetes Scheduler
The Ubernetes Scheduler schedules resources onto the underlying
Kubernetes clusters. For example it watches for unscheduled Ubernetes
replication controllers (those that have not yet been scheduled onto
underlying Kubernetes clusters) and performs the global scheduling
work. For each unscheduled replication controller, it calls the policy
engine to decide how to split workloads among clusters. It creates a
Kubernetes Replication Controller on one or more underlying clusters,
and posts them back to `etcd` storage.
One subtlety worth noting here is that the scheduling decision is arrived at by
combining the application-specific request from the user (which might
include, for example, placement constraints), and the global policy specified
by the federation administrator (for example, "prefer on-premise
clusters over AWS clusters" or "spread load equally across clusters").
## Ubernetes Cluster Controller
The cluster controller
performs the following two kinds of work:
1. It watches all the sub-resources that are created by Ubernetes
components, like a sub-RC or a sub-service. And then it creates the
corresponding API objects on the underlying K8S clusters.
1. It periodically retrieves the available resources metrics from the
underlying K8S cluster, and updates them as object status of the
`cluster` API object. An alternative design might be to run a pod
in each underlying cluster that reports metrics for that cluster to
the Ubernetes control plane. Which approach is better remains an
open topic of discussion.
## Ubernetes Service Controller
The Ubernetes service controller is a federation-level implementation
of the K8S service controller. It watches service resources created on
the control plane and creates corresponding K8S services on each involved K8S
cluster. Besides interacting with service resources on each
individual K8S clusters, the Ubernetes service controller also
performs some global DNS registration work.
## API OBJECTS
## Cluster
Cluster is a new first-class API object introduced in this design. For
each registered K8S cluster there will be such an API resource in
control plane. The way clients register or deregister a cluster is to
send corresponding REST requests to the following URL:
`/api/{$version}/clusters`. Because the control plane behaves like a
regular K8S client to the underlying clusters, the spec of a cluster
object contains necessary properties like the K8S cluster address and
credentials. The status of a cluster API object will contain the
following information:
1. Which phase of its lifecycle it is in.
1. Cluster resource metrics for scheduling decisions.
1. Other metadata, like the version of the cluster.
$version.clusterSpec
| Name | Description | Required | Schema | Default |
|------|-------------|----------|--------|---------|
| Address | The address of the cluster. | yes | address | |
| Credential | The type (e.g. bearer token, client certificate, etc.) and data of the credential used to access the cluster. It's used for system routines (not on behalf of users). | yes | string | |
$version.clusterStatus
| Name | Description | Required | Schema | Default |
|------|-------------|----------|--------|---------|
| Phase | The recently observed lifecycle phase of the cluster. | yes | enum | |
| Capacity | Represents the available resources of a cluster. | yes | any | |
| ClusterMeta | Other cluster metadata, like the version. | yes | ClusterMeta | |
**For simplicity we didn't introduce a separate “cluster metrics” API
object here**. The cluster resource metrics are stored in cluster
status section, just like what we did to nodes in K8S. In phase one it
only contains available CPU resources and memory resources. The
cluster controller will periodically poll the underlying cluster API
Server to get cluster capability. In phase one it gets the metrics by
simply aggregating metrics from all nodes. In the future we will improve
this with more efficient approaches like leveraging Heapster, and more
metrics will be supported. Similar to node phases in K8S, the “phase”
field includes following values:
+ pending: newly registered clusters or clusters suspended by admin
for various reasons. They are not eligible for accepting workloads
+ running: clusters in normal status that can accept workloads
+ offline: clusters temporarily down or not reachable
+ terminated: clusters removed from federation
Below is the state transition diagram.
![Cluster State Transition Diagram](ubernetes-cluster-state.png)
## Replication Controller
A global workload submitted to the control plane is represented as a
replication controller in the Cluster Federation control plane. When a replication controller
is submitted to the control plane, clients need a way to express its
requirements or preferences on clusters. Depending on different use
cases it may be complex. For example:
+ This workload can only be scheduled to cluster Foo. It cannot be
scheduled to any other clusters. (use case: sensitive workloads).
+ This workload prefers cluster Foo. But if there is no available
capacity on cluster Foo, it's OK to be scheduled to cluster Bar
(use case: capacity overflow).
+ Seventy percent of this workload should be scheduled to cluster Foo,
and thirty percent should be scheduled to cluster Bar (use case:
vendor lock-in avoidance).
In phase one, we only introduce a
_clusterSelector_ field to filter acceptable clusters. In the default
case there is no such selector, which means any cluster is
acceptable.
Below is a sample of the YAML to create such a replication controller.
```
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-controller
spec:
replicas: 5
selector:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
clusterSelector:
name in (Foo, Bar)
```
Currently clusterSelector (implemented as a
[LabelSelector](../../pkg/apis/extensions/v1beta1/types.go#L704))
only supports a simple list of acceptable clusters. Workloads will be
evenly distributed on these acceptable clusters in phase one. After
phase one we will define syntax to represent more advanced
constraints, like cluster preference ordering, desired number of
split workloads, desired ratio of workloads spread on different
clusters, etc.
Besides this explicit “clusterSelector” filter, a workload may have
some implicit scheduling restrictions. For example it defines
“nodeSelector” which can only be satisfied on some particular
clusters. How to handle this will be addressed after phase one.
## Federated Services
The Service API object exposed by the Cluster Federation is similar to service
objects on Kubernetes. It defines the access to a group of pods. The
federation service controller will create corresponding Kubernetes
service objects on underlying clusters. These are detailed in a
separate design document: [Federated Services](federated-services.md).
## Pod
In phase one we only support scheduling replication controllers. Pod
scheduling will be supported in later phase. This is primarily in
order to keep the Cluster Federation API compatible with the Kubernetes API.
## ACTIVITY FLOWS
## Scheduling
The diagram below shows how workloads are scheduled on the Cluster Federation control
plane:
1. A replication controller is created by the client.
1. The API Server persists it into storage.
1. The cluster controller periodically polls the latest available resource
metrics from the underlying clusters.
1. The scheduler watches all pending RCs. It picks up an RC, makes
policy-driven decisions and splits it into different sub RCs.
1. Each cluster controller watches the sub RCs bound to its
corresponding cluster. It picks up the newly created sub RC.
1. The cluster controller issues requests to the underlying cluster
API Server to create the RC. In phase one we don't support complex
distribution policies. The scheduling rule is basically:
1. If an RC does not specify any nodeSelector, it will be scheduled
to the least loaded K8S cluster(s) that have enough available
resources.
1. If an RC specifies _N_ acceptable clusters in the
clusterSelector, all replicas will be evenly distributed among
these clusters (a minimal sketch of this split follows this list).
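Below is a minimal sketch of the even split in rule 2, assuming the acceptable clusters have already been filtered; the function name and signature are illustrative, not actual scheduler code.
```go
// splitReplicas sketches the "evenly distributed" rule above: divide the
// replicas across the acceptable clusters and spread any remainder one extra
// replica at a time. Illustrative only.
func splitReplicas(replicas int, clusters []string) map[string]int {
	perCluster := map[string]int{}
	if len(clusters) == 0 {
		return perCluster
	}
	base, extra := replicas/len(clusters), replicas%len(clusters)
	for i, c := range clusters {
		perCluster[c] = base
		if i < extra {
			perCluster[c]++
		}
	}
	return perCluster
}
```
For the sample replication controller above (`replicas: 5`, clusterSelector `name in (Foo, Bar)`), this would yield 3 replicas on one cluster and 2 on the other.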
There is a potential race condition here. Say at time _T1_ the control
plane learns there are _m_ available resources in a K8S cluster. As
the cluster is working independently it still accepts workload
requests from other K8S clients or even another Cluster Federation control
plane. The Cluster Federation scheduling decision is based on this data of
available resources. However when the actual RC creation happens to
the cluster at time _T2_, the cluster may not have enough resources
at that time. We will address this problem in later phases with some
proposed solutions like resource reservation mechanisms.
![Federated Scheduling](ubernetes-scheduling.png)
## Service Discovery
This part has been included in the section “Federated Service” of the
document
“[Federated Cross-cluster Load Balancing and Service Discovery Requirements and System Design](federated-services.md)”.
Please refer to that document for details.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/design/federation-phase-1.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->

Some files were not shown because too many files have changed in this diff Show more