Add glide.yaml and vendor deps

Dalton Hubble 2016-12-03 22:43:32 -08:00
parent db918f12ad
commit 5b3d5e81bd
18880 changed files with 5166045 additions and 1 deletion

vendor/k8s.io/kubernetes/cluster/juju/OWNERS generated vendored Normal file

@ -0,0 +1,8 @@
assignees:
- chuckbutler
- mbruzek
- marcoceppi
- castrojo
owners:
- chuckbutler
- mbruzek

vendor/k8s.io/kubernetes/cluster/juju/bundles/README.md generated vendored Normal file

@ -0,0 +1,208 @@
# kubernetes-bundle
The kubernetes-bundle allows you to deploy the many services of
Kubernetes to a cloud environment and get started using Kubernetes
quickly.
## Kubernetes
Kubernetes is an open source system for managing containerized
applications. Kubernetes uses [Docker](http://docker.com) to run
containerized applications.
## Juju TL;DR
The [Juju](https://jujucharms.com) system provides provisioning and
orchestration across a variety of clouds and bare metal. A Juju bundle
describes a collection of services and how they interrelate. `juju
quickstart` allows you to bootstrap a deployment environment and
deploy a bundle.
## Dive in!
#### Install Juju Quickstart
You will need to
[install the Juju client](https://jujucharms.com/get-started) and
`juju-quickstart` as prerequisites. To deploy the bundle, use
`juju-quickstart`, which runs on Mac OS (`brew install
juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).
### Deploy a Kubernetes Bundle
Use the 'juju quickstart' command to deploy a Kubernetes cluster to any cloud
supported by Juju.
The charm store version of the Kubernetes bundle can be deployed as follows:
juju quickstart u/kubernetes/kubernetes-cluster
> Note: The charm store bundle may be locked to a specific Kubernetes release.
Alternatively, you can deploy a Kubernetes bundle straight from GitHub or a file:
juju quickstart -i https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
The command above does a few things for you:
- Starts a curses-based GUI for managing your cloud or MAAS credentials
- Looks for a bootstrapped deployment environment, and bootstraps if
required. This will launch a bootstrap node in your chosen
deployment environment (machine 0).
- Deploys the Juju GUI to your environment onto the bootstrap node.
- Provisions 4 machines, and deploys the Kubernetes services on top of
them (Kubernetes-master, two Kubernetes minions using flannel, and etcd).
- Orchestrates the relations among the services, and exits.
Now you should have a running Kubernetes cluster. Run `juju status
--format=oneline` to see the address of your kubernetes-master unit.
For further reading, see the [Juju Quickstart documentation](https://pypi.python.org/pypi/juju-quickstart).
Go to the [Getting started with Juju guide](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/juju.md)
for more information about deploying a development Kubernetes cluster.
### Using the Kubernetes Client
You'll need the Kubernetes command line client,
[kubectl](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/kubectl/kubectl.md)
to interact with the created cluster. The kubectl command is
installed on the kubernetes-master charm. If you want to work with
the cluster from your computer, you will need to install the binary
locally.
You can access kubectl in a number of ways using Juju.
via juju run:
juju run --service kubernetes-master/0 "sudo kubectl get nodes"
via juju ssh:
juju ssh kubernetes-master/0 -t "sudo kubectl get nodes"
You may also SSH to the kubernetes-master unit (`juju ssh kubernetes-master/0`)
and call kubectl from the command prompt.
See the
[kubectl documentation](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/kubectl/kubectl.md)
for more details of what can be done with the command line tool.
### Scaling up the cluster
You can add capacity by adding more Docker units:
juju add-unit docker
### Known Limitations
Some Kubernetes functionality is currently platform specific. For
example, load balancers and persistent volumes only work with the
Google Compute provider at this time.
The Juju integration uses the Kubernetes null provider. This means
external load balancers and storage can't be directly driven through
Kubernetes config files at this time. We look forward to adding these
capabilities to the charms.
## More about the components the bundle deploys
### Kubernetes master
The master controls the Kubernetes cluster. It manages the worker
nodes and provides the primary interface for the user to control the cluster.
### Kubernetes minion
The minions are the servers that perform the work. Minions must
communicate with the master and run the workloads that are assigned to
them.
### Flannel-docker
Flannel provides individual subnets for each machine in the cluster by
creating a
[software-defined network](http://en.wikipedia.org/wiki/Software-defined_networking).
### Docker
An open platform for distributed applications for developers and sysadmins.
### Etcd
Etcd persists state for Flannel and Kubernetes. It is a distributed
key-value store with an HTTP interface.
## For further information on getting started with Juju
Juju has complete documentation regarding setup and cloud
configuration on its own
[documentation site](https://jujucharms.com/docs/).
- [Getting Started](https://jujucharms.com/docs/stable/getting-started)
- [Using Juju](https://jujucharms.com/docs/stable/charms)
## Installing kubectl outside of the kubernetes-master unit
Download the Kubernetes release from
https://github.com/kubernetes/kubernetes/releases and extract it.
You can then use the CLI binary directly at
./kubernetes/platforms/linux/amd64/kubectl
You'll need the address of the kubernetes-master unit as an environment variable:
juju status kubernetes-master/0
Grab the public-address there and export it as the KUBERNETES_MASTER
environment variable:
export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | grep kubernetes-master | cut -d' ' -f3):8080
And now you can run kubectl on the command line:
kubectl get no
See the
[kubectl documentation](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/kubectl/kubectl.md)
for more details of what can be done with the command line tool.
## Hacking on the kubernetes-bundle and associated charms
The kubernetes-bundle is open source and available on github.com. If
you want to get started developing on the bundle, you can clone it from
GitHub.
git clone https://github.com/kubernetes/kubernetes.git
Go to the [Getting started with Juju guide](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/juju.md)
for more information about the bundle or charms.
## How to contribute
Send us pull requests! We'll send you a cookie if they include tests and docs.
## Current and Most Complete Information
The charms and bundles are in the [kubernetes](https://github.com/kubernetes/kubernetes)
repository on GitHub.
- [kubernetes-master charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes-master)
- [kubernetes charm on GitHub](https://github.com/kubernetes/kubernetes/tree/master/cluster/juju/charms/trusty/kubernetes)
For more information, see the
[Kubernetes project](https://github.com/kubernetes/kubernetes)
or check out the
[Kubernetes Documentation](https://github.com/kubernetes/kubernetes/tree/master/docs)
for more details about the Kubernetes concepts and terminology.
Having a problem? Check the [Kubernetes issues database](https://github.com/kubernetes/kubernetes/issues)
for related issues.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/juju/bundles/README.md?pixel)]()


@ -0,0 +1,18 @@
services:
kubernetes:
charm: __CHARM_DIR__/builds/kubernetes
annotations:
"gui-x": "600"
"gui-y": "0"
expose: true
num_units: 2
etcd:
charm: cs:~containers/etcd
annotations:
"gui-x": "300"
"gui-y": "0"
num_units: 1
relations:
- - "kubernetes:etcd"
- "etcd:db"
series: xenial


@ -0,0 +1,16 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

vendor/k8s.io/kubernetes/cluster/juju/config-test.sh generated vendored Normal file

@ -0,0 +1,17 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
NUM_NODES=${NUM_NODES:-2}

vendor/k8s.io/kubernetes/cluster/juju/identify-leaders.py generated vendored Executable file

@ -0,0 +1,31 @@
#!/usr/bin/env python
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from subprocess import check_output
import yaml
cmd = ['juju', 'run', '--application', 'kubernetes', '--format=yaml', 'is-leader']
out = check_output(cmd)
try:
parsed_output = yaml.safe_load(out)
for unit in parsed_output:
standard_out = unit['Stdout'].rstrip()
unit_id = unit['UnitId']
if 'True' in standard_out:
print(unit_id)
except:
pass


@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: kube-system


@ -0,0 +1,112 @@
# kubernetes
[Kubernetes](https://github.com/kubernetes/kubernetes) is an open
source system for managing application containers across multiple hosts.
This version of Kubernetes uses [Docker](http://www.docker.io/) to package,
instantiate and run containerized applications.
This charm is an encapsulation of the
[Running Kubernetes locally via
Docker](http://kubernetes.io/docs/getting-started-guides/docker)
document. The released hyperkube image (`gcr.io/google_containers/hyperkube`)
is currently pulled from a [Google-owned container
registry](https://cloud.google.com/container-registry/). For this charm to
work, it will need access to the registry to `docker pull` the images.
This charm was built from other charm layers using the reactive framework.
`layer:docker` is the base layer. For more information, please read [Getting
Started Developing charms](https://jujucharms.com/docs/devel/developer-getting-started).
# Deployment
The kubernetes charms require a relation to a distributed key value store
(etcd), which Kubernetes uses for persistent storage of all of its REST API
objects.
```
juju deploy etcd
juju deploy kubernetes
juju add-relation kubernetes etcd
```
# Configuration
For your convenience this charm supports some configuration options to set up
a Kubernetes cluster that works in your environment:
**version**: Set the version of the Kubernetes containers to deploy. The
version string must be in the following format "v#.#.#" where the numbers
match with the
[kubernetes release labels](https://github.com/kubernetes/kubernetes/releases)
of the [kubernetes github project](https://github.com/kubernetes/kubernetes).
Changing the version causes all the Kubernetes containers to be restarted.
**cidr**: Set the IP range for the Kubernetes cluster, e.g. 10.1.0.0/16 (see the sketch below for how the charm derives cluster addresses from this range).
**dns_domain**: Set the DNS domain for the Kubernetes cluster.
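For illustration, the snippet below sketches how the charm derives cluster addresses from the `cidr` option. It mirrors the `get_sdn_ip` and `get_dns_ip` helpers in `reactive/k8s.py` later in this commit and is not itself part of the charm:
```
# Minimal sketch (illustration only) of deriving cluster addresses from cidr.
def get_sdn_ip(cidr):
    # Drop the prefix length, then replace the last octet with 1 (the SDN gateway).
    ip = cidr.split('/')[0]
    return '.'.join(ip.split('.')[0:-1]) + '.1'

def get_dns_ip(cidr):
    # Replace the last octet with 10 to place the cluster DNS inside the range.
    ip = cidr.split('/')[0]
    return '.'.join(ip.split('.')[0:-1]) + '.10'

print(get_sdn_ip('10.1.0.0/16'))  # 10.1.0.1
print(get_dns_ip('10.1.0.0/16'))  # 10.1.0.10
```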
# Storage
The kubernetes charm is built to handle multiple storage devices if the cloud
provider works with
[Juju storage](https://jujucharms.com/docs/devel/charms-storage).
The 16.04 (xenial) release introduced [ZFS](https://en.wikipedia.org/wiki/ZFS)
to Ubuntu. The xenial charm can use ZFS with a raidz pool. A raidz pool
distributes parity along with the data (similar to a raid5 pool) and can suffer
the loss of one drive while still retaining data. The raidz pool requires a
minimum of 3 disks, but will accept more if they are provided.
You can add storage to the kubernetes charm in increments of 3 or greater:
```
juju add-storage kubernetes/0 disk-pool=ebs,3,1G
```
**Note**: Due to a limitation of raidz you cannot add individual disks to an
existing pool. Should you need to expand the storage of the raidz pool, the
additional add-storage commands must use the same number of disks as the original
command. At that point the charm will have two raidz pools added together, both
of which could handle the loss of one disk each.
The storage code handles the addition of devices to the charm and, when it
receives three disks, creates a raidz pool that is mounted at the /srv/kubernetes
directory by default. If you need the storage in another location, you must
change the `mount-point` value in layer.yaml before the charm is deployed.
To avoid data loss you must attach the storage before making the connection to
the etcd cluster.
## State Events
While this charm is meant to be a top layer, it can be used to build other
solutions. This charm sets or removes reactive framework states that other
layers can react to appropriately. The states that other layers would be
interested in are as follows (a minimal handler sketch follows the list):
**kubelet.available** - The hyperkube container has been run with the kubelet
service and configuration that started the apiserver, controller-manager and
scheduler containers.
**proxy.available** - The hyperkube container has been run with the proxy
service and configuration that handles Kubernetes networking.
**kubectl.package.created** - Indicates the availability of the `kubectl`
application along with the configuration needed to contact the cluster
securely. You will need to download the `/home/ubuntu/kubectl_package.tar.gz`
from the kubernetes leader unit to your machine so you can control the cluster.
**kubedns.available** - Indicates when the Domain Name System (DNS) for the
cluster is operational.
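As a rough sketch of how another layer might consume these states, the handler below waits for `kubelet.available` and `kubedns.available` before doing its own work. The `workload.deployed` state and the handler name are hypothetical; only the two watched states come from the list above:
```
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv

@when('kubelet.available', 'kubedns.available')
@when_not('workload.deployed')
def deploy_workload():
    # The kubelet and cluster DNS have been reported up by the kubernetes
    # layer, so it is reasonable to start creating resources of our own here.
    hookenv.log('Kubernetes services are available; deploying workload.')
    set_state('workload.deployed')
```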
# Kubernetes information
- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes Documentation](http://kubernetes.io/docs/)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)
# Contact
* Charm Author: Matthew Bruzek <Matthew.Bruzek@canonical.com>
* Charm Contributor: Charles Butler <Charles.Butler@canonical.com>
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/juju/layers/kubernetes/README.md?pixel)]()


@ -0,0 +1,2 @@
guestbook-example:
description: Launch the guestbook example in your k8s cluster


@ -0,0 +1,35 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch the Guestbook example in Kubernetes. This will use the pod and service
# definitions from `files/guestbook-example/*.yaml` to launch a leader/follower
# redis cluster, with a web-front end to collect user data and store in redis.
# This example app can easily scale across multiple nodes, and exercises the
# networking, pod creation/scale, service definition, and replica controller of
# kubernetes.
#
# Lifted from github.com/kubernetes/kubernetes/examples/guestbook-example
set -e
if [ ! -d files/guestbook-example ]; then
mkdir -p files/guestbook-example
curl -o $CHARM_DIR/files/guestbook-example/guestbook-all-in-one.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml
fi
kubectl create -f files/guestbook-example/guestbook-all-in-one.yaml


@ -0,0 +1,21 @@
options:
version:
type: string
default: "v1.2.3"
description: |
The version of Kubernetes to use in this charm. The version is inserted
in the configuration files that specify the hyperkube container to use
when starting a Kubernetes cluster. Changing this value will restart the
Kubernetes cluster.
cidr:
type: string
default: 10.1.0.0/16
description: |
Network CIDR to assign to Kubernetes service groups. This must not
overlap with any IP ranges assigned to nodes for pods.
dns_domain:
type: string
default: cluster.local
description: |
The domain name to use for the Kubernetes cluster by the
skydns service.

File diff suppressed because one or more lines are too long

(new image file added, 76 KiB)


@ -0,0 +1,6 @@
includes: ['layer:leadership', 'layer:docker', 'layer:flannel', 'layer:storage', 'layer:tls', 'interface:etcd']
repo: https://github.com/mbruzek/layer-k8s.git
options:
storage:
storage-driver: zfs
mount-point: '/srv/kubernetes'


@ -0,0 +1,19 @@
name: kubernetes
summary: Kubernetes is an application container orchestration platform.
maintainers:
- Matthew Bruzek <matthew.bruzek@canonical.com>
- Charles Butler <charles.butler@canonical.com>
description: |
Kubernetes is an open-source platform for deploying, scaling, and operations
of application containers across a cluster of hosts. Kubernetes is portable
in that it works with public, private, and hybrid clouds. Extensible through
a pluggable infrastructure. Self healing in that it will automatically
restart and place containers on healthy nodes if a node ever goes away.
tags:
- infrastructure
subordinate: false
requires:
etcd:
interface: etcd
series:
- xenial


@ -0,0 +1,485 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from shlex import split
from subprocess import call
from subprocess import check_call
from subprocess import check_output
from charms.docker.compose import Compose
from charms.reactive import hook
from charms.reactive import remove_state
from charms.reactive import set_state
from charms.reactive import when
from charms.reactive import when_any
from charms.reactive import when_not
from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import is_leader
from charmhelpers.core.hookenv import leader_set
from charmhelpers.core.hookenv import leader_get
from charmhelpers.core.templating import render
from charmhelpers.core import unitdata
from charmhelpers.core.host import chdir
import tlslib
@when('leadership.is_leader')
def i_am_leader():
'''The leader is the Kubernetes master node. '''
leader_set({'master-address': hookenv.unit_private_ip()})
@when_not('tls.client.authorization.required')
def configure_easrsa():
'''Require the tls layer to generate certificates with "clientAuth". '''
# By default easyrsa generates the server certificates without clientAuth
# Setting this state before easyrsa is configured ensures the tls layer is
# configured to generate certificates with client authentication.
set_state('tls.client.authorization.required')
domain = hookenv.config().get('dns_domain')
cidr = hookenv.config().get('cidr')
sdn_ip = get_sdn_ip(cidr)
# Create extra sans that the tls layer will add to the server cert.
extra_sans = [
sdn_ip,
'kubernetes',
'kubernetes.{0}'.format(domain),
'kubernetes.default',
'kubernetes.default.svc',
'kubernetes.default.svc.{0}'.format(domain)
]
unitdata.kv().set('extra_sans', extra_sans)
@hook('config-changed')
def config_changed():
'''If the configuration values change, remove the available states.'''
config = hookenv.config()
if any(config.changed(key) for key in config.keys()):
hookenv.log('The configuration options have changed.')
# Use the Compose class that encapsulates the docker-compose commands.
compose = Compose('files/kubernetes')
if is_leader():
hookenv.log('Removing master container and kubelet.available state.') # noqa
# Stop and remove the Kubernetes kubelet container.
compose.kill('master')
compose.rm('master')
compose.kill('proxy')
compose.rm('proxy')
# Remove the state so the code can react to restarting kubelet.
remove_state('kubelet.available')
else:
hookenv.log('Removing kubelet container and kubelet.available state.') # noqa
# Stop and remove the Kubernetes kubelet container.
compose.kill('kubelet')
compose.rm('kubelet')
# Remove the state so the code can react to restarting kubelet.
remove_state('kubelet.available')
hookenv.log('Removing proxy container and proxy.available state.')
# Stop and remove the Kubernetes proxy container.
compose.kill('proxy')
compose.rm('proxy')
# Remove the state so the code can react to restarting proxy.
remove_state('proxy.available')
if config.changed('version'):
hookenv.log('The version changed removing the states so the new '
'version of kubectl will be downloaded.')
remove_state('kubectl.downloaded')
remove_state('kubeconfig.created')
@when('tls.server.certificate available')
@when_not('k8s.server.certificate available')
def server_cert():
'''When the server certificate is available, get the server certificate
from the charm unitdata and write it to the kubernetes directory. '''
server_cert = '/srv/kubernetes/server.crt'
server_key = '/srv/kubernetes/server.key'
# Save the server certificate from unit data to the destination.
tlslib.server_cert(None, server_cert, user='ubuntu', group='ubuntu')
# Copy the server key from the default location to the destination.
tlslib.server_key(None, server_key, user='ubuntu', group='ubuntu')
set_state('k8s.server.certificate available')
@when('tls.client.certificate available')
@when_not('k8s.client.certficate available')
def client_cert():
'''When the client certificate is available, get the client certificate
from the charm unitdata and write it to the kubernetes directory. '''
client_cert = '/srv/kubernetes/client.crt'
client_key = '/srv/kubernetes/client.key'
# Save the client certificate from the default location to the destination.
tlslib.client_cert(None, client_cert, user='ubuntu', group='ubuntu')
# Copy the client key from the default location to the destination.
tlslib.client_key(None, client_key, user='ubuntu', group='ubuntu')
set_state('k8s.client.certficate available')
@when('tls.certificate.authority available')
@when_not('k8s.certificate.authority available')
def ca():
'''When the Certificate Authority is available, copy the CA from the
default location to the /srv/kubernetes directory. '''
ca_crt = '/srv/kubernetes/ca.crt'
# Copy the Certificate Authority to the destination directory.
tlslib.ca(None, ca_crt, user='ubuntu', group='ubuntu')
set_state('k8s.certificate.authority available')
@when('kubelet.available', 'leadership.is_leader')
@when_not('kubedns.available', 'skydns.available')
def launch_dns():
'''Create the "kube-system" namespace, the kubedns resource controller,
and the kubedns service. '''
hookenv.log('Creating kubernetes kubedns on the master node.')
# Only launch and track this state on the leader.
# Launching duplicate kubeDNS rc will raise an error
# Run a command to check if the apiserver is responding.
return_code = call(split('kubectl cluster-info'))
if return_code != 0:
hookenv.log('kubectl command failed, waiting for apiserver to start.')
remove_state('kubedns.available')
# Return without setting kubedns.available so this method will retry.
return
# Check for the "kube-system" namespace.
return_code = call(split('kubectl get namespace kube-system'))
if return_code != 0:
# Create the kube-system namespace that is used by the kubedns files.
check_call(split('kubectl create namespace kube-system'))
# Check for the kubedns replication controller.
return_code = call(split('kubectl get -f files/manifests/kubedns-rc.yaml'))
if return_code != 0:
# Create the kubedns replication controller from the rendered file.
check_call(split('kubectl create -f files/manifests/kubedns-rc.yaml'))
# Check for the kubedns service.
return_code = call(split('kubectl get -f files/manifests/kubedns-svc.yaml'))
if return_code != 0:
# Create the kubedns service from the rendered file.
check_call(split('kubectl create -f files/manifests/kubedns-svc.yaml'))
set_state('kubedns.available')
@when('skydns.available', 'leadership.is_leader')
def convert_to_kubedns():
'''Delete the skydns containers to make way for the kubedns containers.'''
hookenv.log('Deleting the old skydns deployment.')
# Delete the skydns replication controller.
return_code = call(split('kubectl delete rc kube-dns-v11'))
# Delete the skydns service.
return_code = call(split('kubectl delete svc kube-dns'))
remove_state('skydns.available')
@when('docker.available')
@when_not('etcd.available')
def relation_message():
'''Take over messaging to let the user know a relation to the ETCD
cluster is required before going any further. '''
status_set('waiting', 'Waiting for relation to ETCD')
@when('kubeconfig.created')
@when('etcd.available')
@when_not('kubelet.available', 'proxy.available')
def start_kubelet(etcd):
'''Run the hyperkube container that starts the kubernetes services.
When the leader, run the master services (apiserver, controller, scheduler,
proxy)
using the master.json from the rendered manifest directory.
When a follower, start the node services (kubelet, and proxy). '''
render_files(etcd)
# Use the Compose class that encapsulates the docker-compose commands.
compose = Compose('files/kubernetes')
status_set('maintenance', 'Starting the Kubernetes services.')
if is_leader():
compose.up('master')
compose.up('proxy')
set_state('kubelet.available')
# Open the secure port for api-server.
hookenv.open_port(6443)
else:
# Start the Kubernetes kubelet container using docker-compose.
compose.up('kubelet')
set_state('kubelet.available')
# Start the Kubernetes proxy container using docker-compose.
compose.up('proxy')
set_state('proxy.available')
status_set('active', 'Kubernetes services started')
@when('docker.available')
@when_not('kubectl.downloaded')
def download_kubectl():
'''Download the kubectl binary to test and interact with the cluster.'''
status_set('maintenance', 'Downloading the kubectl binary')
version = hookenv.config()['version']
cmd = 'wget -nv -O /usr/local/bin/kubectl https://storage.googleapis.com' \
'/kubernetes-release/release/{0}/bin/linux/{1}/kubectl'
cmd = cmd.format(version, arch())
hookenv.log('Downloading kubectl: {0}'.format(cmd))
check_call(split(cmd))
cmd = 'chmod +x /usr/local/bin/kubectl'
check_call(split(cmd))
set_state('kubectl.downloaded')
@when('kubectl.downloaded', 'leadership.is_leader', 'k8s.certificate.authority available', 'k8s.client.certficate available') # noqa
@when_not('kubeconfig.created')
def master_kubeconfig():
'''Create the kubernetes configuration for the master unit. The master
should create a package with the client credentials so the user can
interact securely with the apiserver.'''
hookenv.log('Creating Kubernetes configuration for master node.')
directory = '/srv/kubernetes'
ca = '/srv/kubernetes/ca.crt'
key = '/srv/kubernetes/client.key'
cert = '/srv/kubernetes/client.crt'
# Get the public address of the apiserver so users can access the master.
server = 'https://{0}:{1}'.format(hookenv.unit_public_ip(), '6443')
# Create the client kubeconfig so users can access the master node.
create_kubeconfig(directory, server, ca, key, cert)
# Copy the kubectl binary to this directory.
cmd = 'cp -v /usr/local/bin/kubectl {0}'.format(directory)
check_call(split(cmd))
# Use a context manager to run the tar command in a specific directory.
with chdir(directory):
# Create a package with kubectl and the files to use it externally.
cmd = 'tar -cvzf /home/ubuntu/kubectl_package.tar.gz ca.crt ' \
'client.key client.crt kubectl kubeconfig'
check_call(split(cmd))
# This sets up the client workspace consistently on the leader and nodes.
node_kubeconfig()
set_state('kubeconfig.created')
@when('kubectl.downloaded', 'k8s.certificate.authority available', 'k8s.server.certificate available') # noqa
@when_not('kubeconfig.created', 'leadership.is_leader')
def node_kubeconfig():
'''Create the kubernetes configuration (kubeconfig) for this unit.
The nodes will create a kubeconfig with the server credentials so
the services can interact securely with the apiserver.'''
hookenv.log('Creating Kubernetes configuration for worker node.')
directory = '/var/lib/kubelet'
ca = '/srv/kubernetes/ca.crt'
cert = '/srv/kubernetes/server.crt'
key = '/srv/kubernetes/server.key'
# Get the private address of the apiserver for communication between units.
server = 'https://{0}:{1}'.format(leader_get('master-address'), '6443')
# Create the kubeconfig for the other services.
kubeconfig = create_kubeconfig(directory, server, ca, key, cert)
# Install the kubeconfig in the root user's home directory.
install_kubeconfig(kubeconfig, '/root/.kube', 'root')
# Install the kubeconfig in the ubuntu user's home directory.
install_kubeconfig(kubeconfig, '/home/ubuntu/.kube', 'ubuntu')
set_state('kubeconfig.created')
@when('proxy.available')
@when_not('cadvisor.available')
def start_cadvisor():
'''Start the cAdvisor container that gives metrics about the other
application containers on this system. '''
compose = Compose('files/kubernetes')
compose.up('cadvisor')
hookenv.open_port(8088)
status_set('active', 'cadvisor running on port 8088')
set_state('cadvisor.available')
@when('kubelet.available', 'kubeconfig.created')
@when_any('proxy.available', 'cadvisor.available', 'kubedns.available')
def final_message():
'''Issue some final messages when the services are started. '''
# TODO: Run a simple/quick health checks before issuing this message.
status_set('active', 'Kubernetes running.')
def gather_sdn_data():
'''Get the Software Defined Network (SDN) information and return it as a
dictionary. '''
sdn_data = {}
# The dictionary named 'pillar' is a construct of the k8s template files.
pillar = {}
# SDN Providers pass data via the unitdata.kv module
db = unitdata.kv()
# Ideally the DNS address should come from the sdn cidr.
subnet = db.get('sdn_subnet')
if subnet:
# Generate the DNS ip address on the SDN cidr (this is desired).
pillar['dns_server'] = get_dns_ip(subnet)
else:
# There is no SDN cidr; fall back to the kubernetes config cidr option.
pillar['dns_server'] = get_dns_ip(hookenv.config().get('cidr'))
# The pillar['dns_domain'] value is used in the kubedns-rc.yaml
pillar['dns_domain'] = hookenv.config().get('dns_domain')
# Use a 'pillar' dictionary so we can reuse the upstream kubedns templates.
sdn_data['pillar'] = pillar
return sdn_data
def install_kubeconfig(kubeconfig, directory, user):
'''Copy the kubeconfig file to a new directory, creating directories
if necessary. '''
# The file and directory must be owned by the correct user.
chown = 'chown {0}:{0} {1}'
if not os.path.isdir(directory):
os.makedirs(directory)
# Change the ownership of the config file to the right user.
check_call(split(chown.format(user, directory)))
# kubectl looks for a file named "config" in the ~/.kube directory.
config = os.path.join(directory, 'config')
# Copy the kubeconfig file to the directory renaming it to "config".
cmd = 'cp -v {0} {1}'.format(kubeconfig, config)
check_call(split(cmd))
# Change the ownership of the config file to the right user.
check_call(split(chown.format(user, config)))
def create_kubeconfig(directory, server, ca, key, cert, user='ubuntu'):
'''Create a configuration for kubernetes in a specific directory using
the supplied arguments, return the path to the file.'''
context = 'default-context'
cluster_name = 'kubernetes'
# Ensure the destination directory exists.
if not os.path.isdir(directory):
os.makedirs(directory)
# The configuration file should be in this directory named kubeconfig.
kubeconfig = os.path.join(directory, 'kubeconfig')
# Create the config file with the address of the master server.
cmd = 'kubectl config set-cluster --kubeconfig={0} {1} ' \
'--server={2} --certificate-authority={3}'
check_call(split(cmd.format(kubeconfig, cluster_name, server, ca)))
# Create the credentials using the client flags.
cmd = 'kubectl config set-credentials --kubeconfig={0} {1} ' \
'--client-key={2} --client-certificate={3}'
check_call(split(cmd.format(kubeconfig, user, key, cert)))
# Create a default context with the cluster.
cmd = 'kubectl config set-context --kubeconfig={0} {1} ' \
'--cluster={2} --user={3}'
check_call(split(cmd.format(kubeconfig, context, cluster_name, user)))
# Make the config use this new context.
cmd = 'kubectl config use-context --kubeconfig={0} {1}'
check_call(split(cmd.format(kubeconfig, context)))
hookenv.log('kubectl configuration created at {0}.'.format(kubeconfig))
return kubeconfig
def get_dns_ip(cidr):
'''Get an IP address for the DNS server on the provided cidr.'''
# Remove the range from the cidr.
ip = cidr.split('/')[0]
# Take the last octet off the IP address and replace it with 10.
return '.'.join(ip.split('.')[0:-1]) + '.10'
def get_sdn_ip(cidr):
'''Get the IP address for the SDN gateway based on the provided cidr.'''
# Remove the range from the cidr.
ip = cidr.split('/')[0]
# Remove the last octet and replace it with 1.
return '.'.join(ip.split('.')[0:-1]) + '.1'
def render_files(reldata=None):
'''Use jinja templating to render the docker-compose.yml and master.json
file to contain the dynamic data for the configuration files.'''
context = {}
# Load the context data with SDN data.
context.update(gather_sdn_data())
# Add the charm configuration data to the context.
context.update(hookenv.config())
if reldata:
connection_string = reldata.get_connection_string()
# Define where the etcd tls files will be kept.
etcd_dir = '/etc/ssl/etcd'
# Create paths to the etcd client ca, key, and cert file locations.
ca = os.path.join(etcd_dir, 'client-ca.pem')
key = os.path.join(etcd_dir, 'client-key.pem')
cert = os.path.join(etcd_dir, 'client-cert.pem')
# Save the client credentials (in relation data) to the paths provided.
reldata.save_client_credentials(key, cert, ca)
# Update the context so the template has the etcd information.
context.update({'etcd_dir': etcd_dir,
'connection_string': connection_string,
'etcd_ca': ca,
'etcd_key': key,
'etcd_cert': cert})
charm_dir = hookenv.charm_dir()
rendered_kube_dir = os.path.join(charm_dir, 'files/kubernetes')
if not os.path.exists(rendered_kube_dir):
os.makedirs(rendered_kube_dir)
rendered_manifest_dir = os.path.join(charm_dir, 'files/manifests')
if not os.path.exists(rendered_manifest_dir):
os.makedirs(rendered_manifest_dir)
# Update the context with extra values, arch, manifest dir, and private IP.
context.update({'arch': arch(),
'master_address': leader_get('master-address'),
'manifest_directory': rendered_manifest_dir,
'public_address': hookenv.unit_get('public-address'),
'private_address': hookenv.unit_get('private-address')})
# Adapted from: http://kubernetes.io/docs/getting-started-guides/docker/
target = os.path.join(rendered_kube_dir, 'docker-compose.yml')
# Render the files/kubernetes/docker-compose.yml file that contains the
# definition for kubelet and proxy.
render('docker-compose.yml', target, context)
if is_leader():
# Source: https://github.com/kubernetes/...master/cluster/images/hyperkube # noqa
target = os.path.join(rendered_manifest_dir, 'master.json')
# Render the files/manifests/master.json that contains parameters for
# the apiserver, controller-manager, and scheduler containers.
render('master.json', target, context)
# Source: ...cluster/addons/dns/skydns-svc.yaml.in
target = os.path.join(rendered_manifest_dir, 'kubedns-svc.yaml')
# Render files/kubernetes/kubedns-svc.yaml for the DNS service.
render('kubedns-svc.yaml', target, context)
# Source: ...cluster/addons/dns/skydns-rc.yaml.in
target = os.path.join(rendered_manifest_dir, 'kubedns-rc.yaml')
# Render files/kubernetes/kubedns-rc.yaml for the DNS pod.
render('kubedns-rc.yaml', target, context)
def status_set(level, message):
'''Output status message with leadership information.'''
if is_leader():
message = '{0} (master) '.format(message)
hookenv.status_set(level, message)
def arch():
'''Return the package architecture as a string. Raise an exception if the
architecture is not supported by kubernetes.'''
# Get the package architecture for this system.
architecture = check_output(['dpkg', '--print-architecture']).rstrip()
# Convert the binary result into a string.
architecture = architecture.decode('utf-8')
# Validate the architecture is supported by kubernetes.
if architecture not in ['amd64', 'arm', 'arm64', 'ppc64le']:
message = 'Unsupported machine architecture: {0}'.format(architecture)
status_set('blocked', message)
raise Exception(message)
return architecture


@ -0,0 +1,134 @@
# http://kubernetes.io/docs/getting-started-guides/docker/
# # Start kubelet and then start master components as pods
# docker run \
# --net=host \
# --pid=host \
# --privileged \
# --restart=on-failure \
# -d \
# -v /sys:/sys:ro \
# -v /var/run:/var/run:rw \
# -v /:/rootfs:ro \
# -v /var/lib/docker/:/var/lib/docker:rw \
# -v /var/lib/kubelet/:/var/lib/kubelet:rw \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube kubelet \
# --address=0.0.0.0 \
# --allow-privileged=true \
# --enable-server \
# --api-servers=http://localhost:8080 \
# --config=/etc/kubernetes/manifests-multi \
# --cluster-dns=10.0.0.10 \
# --cluster-domain=cluster.local \
# --containerized \
# --v=2
master:
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
net: host
pid: host
privileged: true
restart: always
volumes:
- /:/rootfs:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:rw
- /var/lib/kubelet/:/var/lib/kubelet:rw
- /var/run:/var/run:rw
- {{ manifest_directory }}:/etc/kubernetes/manifests:rw
- /srv/kubernetes:/srv/kubernetes
command: |
/hyperkube kubelet
--address="0.0.0.0"
--allow-privileged=true
--api-servers=http://localhost:8080
--cluster-dns={{ pillar['dns_server'] }}
--cluster-domain={{ pillar['dns_domain'] }}
--config=/etc/kubernetes/manifests
--containerized
--hostname-override="{{ private_address }}"
--tls-cert-file="/srv/kubernetes/server.crt"
--tls-private-key-file="/srv/kubernetes/server.key"
--v=2
# Start kubelet without the config option and only kubelet starts.
# kubelet gets the tls credentials from /var/lib/kubelet/kubeconfig
# docker run \
# --net=host \
# --pid=host \
# --privileged \
# --restart=on-failure \
# -d \
# -v /sys:/sys:ro \
# -v /var/run:/var/run:rw \
# -v /:/rootfs:ro \
# -v /var/lib/docker/:/var/lib/docker:rw \
# -v /var/lib/kubelet/:/var/lib/kubelet:rw \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube kubelet \
# --allow-privileged=true \
# --api-servers=http://${MASTER_IP}:8080 \
# --address=0.0.0.0 \
# --enable-server \
# --cluster-dns=10.0.0.10 \
# --cluster-domain=cluster.local \
# --containerized \
# --v=2
kubelet:
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
net: host
pid: host
privileged: true
restart: always
volumes:
- /:/rootfs:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:rw
- /var/lib/kubelet/:/var/lib/kubelet:rw
- /var/run:/var/run:rw
- /srv/kubernetes:/srv/kubernetes
command: |
/hyperkube kubelet
--address="0.0.0.0"
--allow-privileged=true
--api-servers=https://{{ master_address }}:6443
--cluster-dns={{ pillar['dns_server'] }}
--cluster-domain={{ pillar['dns_domain'] }}
--containerized
--hostname-override="{{ private_address }}"
--v=2
# docker run \
# -d \
# --net=host \
# --privileged \
# --restart=on-failure \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube proxy \
# --master=http://${MASTER_IP}:8080 \
# --v=2
proxy:
net: host
privileged: true
restart: always
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
command: |
/hyperkube proxy
--master=http://{{ master_address }}:8080
--v=2
# cAdvisor (Container Advisor) provides container users an understanding of
# the resource usage and performance characteristics of their running containers.
cadvisor:
image: google/cadvisor:latest
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker:/var/lib/docker:ro
ports:
- 8088:8080
restart: always


@ -0,0 +1,163 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Warning: This is a file generated from the base underscore template file: skydns-rc.yaml.base
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
# replicas: not specified here:
# 1. In order to make Addon Manager not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-{{ arch }}:1.9
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
livenessProbe:
httpGet:
path: /healthz-kubedns
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain={{ pillar['dns_domain'] }}.
- --dns-port=10053
- --config-map=kube-dns
- --v=2
- --kube_master_url=http://{{ private_address }}:8080
{{ pillar['federations_domain_map'] }}
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-{{ arch }}:1.4
livenessProbe:
httpGet:
path: /healthz-dnsmasq
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: dnsmasq-metrics
image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 10Mi
- name: healthz
image: gcr.io/google_containers/exechealthz-{{ arch }}:1.2
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
# Note that this container shouldn't really need 50Mi of memory. The
# limits are set higher than expected pending investigation on #29688.
# The extra memory was stolen from the kubedns container to keep the
# net memory requested by the pod constant.
memory: 50Mi
args:
- --cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
- --url=/healthz-dnsmasq
- --cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1:10053 >/dev/null
- --url=/healthz-kubedns
- --port=8080
- --quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default # Don't use cluster DNS.


@ -0,0 +1,38 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/images/hyperkube/dns-svc.yaml
# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP


@ -0,0 +1,106 @@
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name":"k8s-master"},
"spec":{
"hostNetwork": true,
"containers":[
{
"name": "controller-manager",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"controller-manager",
"--master=127.0.0.1:8080",
"--service-account-private-key-file=/srv/kubernetes/server.key",
"--root-ca-file=/srv/kubernetes/ca.crt",
"--min-resync-period=3m",
"--v=2"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
}
]
},
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"apiserver",
"--service-cluster-ip-range={{ cidr }}",
"--insecure-bind-address=0.0.0.0",
{% if etcd_dir -%}
"--etcd-cafile={{ etcd_ca }}",
"--etcd-keyfile={{ etcd_key }}",
"--etcd-certfile={{ etcd_cert }}",
{%- endif %}
"--etcd-servers={{ connection_string }}",
"--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota",
"--client-ca-file=/srv/kubernetes/ca.crt",
"--basic-auth-file=/srv/kubernetes/basic_auth.csv",
"--min-request-timeout=300",
"--tls-cert-file=/srv/kubernetes/server.crt",
"--tls-private-key-file=/srv/kubernetes/server.key",
"--token-auth-file=/srv/kubernetes/known_tokens.csv",
"--allow-privileged=true",
"--v=4"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
},
{% if etcd_dir -%}
{
"name": "etcd-tls",
"mountPath": "{{ etcd_dir }}"
}
{%- endif %}
]
},
{
"name": "scheduler",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"scheduler",
"--master=127.0.0.1:8080",
"--v=2"
]
},
{
"name": "setup",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/setup-files.sh",
"IP:{{ private_address }},IP:{{ public_address }},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/data"
}
]
}
],
"volumes": [
{
"hostPath": {
"path": "/srv/kubernetes"
},
"name": "data"
},
{% if etcd_dir -%}
{
"hostPath": {
"path": "{{ etcd_dir }}"
},
"name": "etcd-tls"
}
{%- endif %}
]
}
}


@ -0,0 +1,5 @@
tests: "*kubernetes*"
bootstrap: false
reset: false
python_packages:
- tox


@ -0,0 +1,48 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
function check_for_ppa() {
local repo="$1"
grep -qsw $repo /etc/apt/sources.list /etc/apt/sources.list.d/*
}
function package_status() {
local pkgname=$1
local pkgstatus
pkgstatus=$(dpkg-query -W --showformat='${Status}\n' "${pkgname}")
if [[ "${pkgstatus}" != "install ok installed" ]]; then
echo "Missing package ${pkgname}"
sudo apt-get --force-yes --yes install ${pkgname}
fi
}
function gather_installation_reqs() {
if ! check_for_ppa "juju"; then
echo "... Detected missing dependencies.. running"
echo "... add-apt-repository ppa:juju/stable"
sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
fi
package_status 'juju'
package_status 'charm-tools'
}

vendor/k8s.io/kubernetes/cluster/juju/return-node-ips.py generated vendored Executable file

@ -0,0 +1,29 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import sys
# This script helps parse out the private IP addresses from the
# `juju run` command's JSON object, see cluster/juju/util.sh
if len(sys.argv) > 1:
# It takes the JSON output as the first argument.
nodes = json.loads(sys.argv[1])
# There can be multiple nodes; print the Stdout for each.
for num in nodes:
print num['Stdout'].rstrip()
else:
exit(1)

vendor/k8s.io/kubernetes/cluster/juju/util.sh generated vendored Executable file

@ -0,0 +1,150 @@
#!/bin/bash
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
set -o errexit
set -o nounset
set -o pipefail
#set -o xtrace
UTIL_SCRIPT=$(readlink -m "${BASH_SOURCE}")
JUJU_PATH=$(dirname ${UTIL_SCRIPT})
KUBE_ROOT=$(readlink -m ${JUJU_PATH}/../../)
# Use the config file specified in $KUBE_CONFIG_FILE, or config-default.sh.
source "${JUJU_PATH}/${KUBE_CONFIG_FILE-config-default.sh}"
# This attempts installation of Juju - This really needs to support multiple
# providers/distros - but I'm super familiar with ubuntu so assume that for now.
source "${JUJU_PATH}/prereqs/ubuntu-juju.sh"
export JUJU_REPOSITORY="${JUJU_PATH}/charms"
KUBE_BUNDLE_PATH="${JUJU_PATH}/bundles/local.yaml"
# The directory for the kubectl binary, this is one of the paths in kubectl.sh.
KUBECTL_DIR="${KUBE_ROOT}/platforms/linux/amd64"
function build-local() {
# This used to build the kubernetes project. Now it rebuilds the charm(s)
# living in `cluster/juju/layers`
charm build ${JUJU_PATH}/layers/kubernetes -o $JUJU_REPOSITORY -r --no-local-layers
}
function detect-master() {
local kubestatus
# Capturing a newline, and my awk-fu was weak - pipe through tr -d
kubestatus=$(juju status --format=oneline kubernetes | grep ${KUBE_MASTER_NAME} | awk '{print $3}' | tr -d "\n")
export KUBE_MASTER_IP=${kubestatus}
export KUBE_SERVER=https://${KUBE_MASTER_IP}:6443
}
function detect-nodes() {
# Run the Juju command that gets the minion private IP addresses.
local ipoutput
ipoutput=$(juju run --application kubernetes "unit-get private-address" --format=json)
# [
# {"MachineId":"2","Stdout":"192.168.122.188\n","UnitId":"kubernetes/0"},
# {"MachineId":"3","Stdout":"192.168.122.166\n","UnitId":"kubernetes/1"}
# ]
# Strip out the IP addresses
export KUBE_NODE_IP_ADDRESSES=($(${JUJU_PATH}/return-node-ips.py "${ipoutput}"))
# echo "Kubernetes minions: " ${KUBE_NODE_IP_ADDRESSES[@]} 1>&2
export NUM_NODES=${#KUBE_NODE_IP_ADDRESSES[@]}
}
function kube-up() {
build-local
# Replace the charm directory in the bundle.
sed "s|__CHARM_DIR__|${JUJU_REPOSITORY}|" < ${KUBE_BUNDLE_PATH}.base > ${KUBE_BUNDLE_PATH}
# The juju-deployer command will deploy the bundle and can be run
# multiple times to continue deploying the parts that fail.
juju deploy ${KUBE_BUNDLE_PATH}
source "${KUBE_ROOT}/cluster/common.sh"
# Sleep due to juju bug http://pad.lv/1432759
sleep-status
detect-master
detect-nodes
# Copy kubectl, the cert and key to this machine from master.
(
umask 077
mkdir -p ${KUBECTL_DIR}
juju scp ${KUBE_MASTER_NAME}:kubectl_package.tar.gz ${KUBECTL_DIR}
tar xfz ${KUBECTL_DIR}/kubectl_package.tar.gz -C ${KUBECTL_DIR}
)
# Export the location of the kubectl configuration file.
export KUBECONFIG="${KUBECTL_DIR}/kubeconfig"
}
function kube-down() {
local force="${1-}"
local jujuenv
jujuenv=$(juju switch)
juju destroy-model ${jujuenv} ${force} || true
# Clean up the generated charm files.
rm -rf ${KUBE_ROOT}/cluster/juju/charms
# Clean up the kubectl binary and config file.
rm -rf ${KUBECTL_DIR}
}
function prepare-e2e() {
echo "prepare-e2e() The Juju provider does not need any preparations for e2e." 1>&2
}
function sleep-status() {
local i
local maxtime
local jujustatus
i=0
maxtime=900
jujustatus=''
echo "Waiting up to 15 minutes to allow the cluster to come online... wait for it..." 1>&2
while [[ $i < $maxtime && -z $jujustatus ]]; do
sleep 15
i=$((i + 15))
jujustatus=$(${JUJU_PATH}/identify-leaders.py)
export KUBE_MASTER_NAME=${jujustatus}
done
}
# Execute prior to running tests to build a release if required for environment.
function test-build-release {
echo "test-build-release() " 1>&2
}
# Execute prior to running tests to initialize required structure. This is
# called from hack/e2e.go only when running -up.
function test-setup {
"${KUBE_ROOT}/cluster/kube-up.sh"
}
# Execute after running tests to perform any required clean-up. This is called
# from hack/e2e.go
function test-teardown() {
kube-down "-y"
}
# Verify the prerequisites are satisfied before running.
function verify-prereqs() {
gather_installation_reqs
}