Merge pull request #9739 from SvenDowideit/post-1.4.1-docs-update-2
Post 1.4.1 docs update 2
Fred Lifton committed Jan 27, 2015
2 parents 0646589 + 71194c2 commit 359c74c
Showing 86 changed files with 2,292 additions and 1,208 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -28,3 +28,4 @@ docs/AWS_S3_BUCKET
docs/GIT_BRANCH
docs/VERSION
docs/GITCOMMIT
docs/changed-files
8 changes: 6 additions & 2 deletions Makefile
@@ -30,7 +30,7 @@ DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH),:$(GIT_BRANCH))

DOCKER_RUN_DOCKER := docker run --rm -it --privileged $(DOCKER_ENVS) $(DOCKER_MOUNT) "$(DOCKER_IMAGE)"

DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE

# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
@@ -53,7 +53,10 @@ docs-shell: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash

docs-release: docs-build
$(DOCKER_RUN_DOCS) -e OPTIONS -e BUILD_ROOT "$(DOCKER_DOCS_IMAGE)" ./release.sh
$(DOCKER_RUN_DOCS) -e OPTIONS -e BUILD_ROOT -e DISTRIBUTION_ID "$(DOCKER_DOCS_IMAGE)" ./release.sh

docs-test: docs-build
$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" ./test.sh

test: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross test-unit test-integration test-integration-cli
@@ -77,6 +80,7 @@ build: bundles
docker build -t "$(DOCKER_IMAGE)" .

docs-build:
( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs docs/ > docs/changed-files
cp ./VERSION docs/VERSION
echo "$(GIT_BRANCH)" > docs/GIT_BRANCH
echo "$(AWS_S3_BUCKET)" > docs/AWS_S3_BUCKET
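The new `docs-build` step above writes `docs/changed-files`, a list of documentation files that differ between the `upstream/release` and `upstream/docs` branches; `release.sh` later reads this file to decide which paths to invalidate in CloudFront. A minimal sketch of that step run by hand, assuming a remote named `upstream` carrying both branches:

    # Sketch only: record which docs files changed between the release and docs
    # branches, so release.sh can invalidate just those paths.
    git fetch upstream
    git diff --name-status upstream/release..upstream/docs docs/ > docs/changed-files
    # Each line is a status letter plus a path, e.g. "M docs/sources/articles/basics.md"
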
3 changes: 2 additions & 1 deletion api/client/commands.go
@@ -1340,7 +1340,7 @@ func (cli *DockerCli) CmdImages(args ...string) error {
flTree := cmd.Bool([]string{"#t", "#tree", "#-tree"}, false, "Output graph in tree format")

flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Provide filter values (i.e. 'dangling=true')")
cmd.Var(&flFilter, []string{"f", "-filter"}, "Provide filter values (i.e., 'dangling=true')")

if err := cmd.Parse(args); err != nil {
return nil
@@ -1788,6 +1788,7 @@ func (cli *DockerCli) CmdEvents(args ...string) error {

flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Provide filter values (i.e. 'event=stop')")
cmd.Var(&flFilter, []string{"f", "-filter"}, "Provide filter values (i.e., 'event=stop')")

if err := cmd.Parse(args); err != nil {
return nil
1 change: 1 addition & 0 deletions contrib/completion/fish/docker.fish
@@ -136,6 +136,7 @@ complete -c docker -A -f -n '__fish_seen_subcommand_from history' -a '(__fish_pr
# images
complete -c docker -f -n '__fish_docker_no_subcommand' -a images -d 'List images'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s a -l all -d 'Show all images (by default filter out the intermediate image layers)'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s f -l filter -d "Provide filter values (i.e., 'dangling=true')"
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s q -l quiet -d 'Only show numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s t -l tree -d 'Output graph in tree format'
2 changes: 2 additions & 0 deletions docs/Dockerfile
@@ -56,4 +56,6 @@ RUN VERSION=$(cat VERSION) \

EXPOSE 8000

RUN cd sources && rgrep --files-with-matches '{{ include ".*" }}' | xargs sed -i~ 's/{{ include "\(.*\)" }}/cat include\/\1/ge'

CMD ["mkdocs", "serve"]
13 changes: 10 additions & 3 deletions docs/README.md
@@ -33,6 +33,11 @@ In the root of the `docker` source directory:
If you have any issues you need to debug, you can use `make docs-shell` and then
run `mkdocs serve`

## Testing the links

You can use `make docs-test` to generate a report of missing links that are referenced in
the documentation - there should be none.

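As a quick usage sketch of the target this section documents (assuming the docs image builds cleanly on your machine):

    # Build the docs image and print the broken-link report described above;
    # an empty report means no missing link targets were found.
    make docs-test
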
## Adding a new document

New document (`.md`) files are added to the documentation builds by adding them
@@ -140,11 +145,13 @@ to view your results and make sure what you published is what you wanted.

When you're happy with it, publish the docs to our live site:

make AWS_S3_BUCKET=docs.docker.com BUILD_ROOT=yes docs-release
make AWS_S3_BUCKET=docs.docker.com BUILD_ROOT=yes DISTRIBUTION_ID=C2K6......FL2F docs-release

Test the uncached version of the live docs at http://docs.docker.com.s3-website-us-east-1.amazonaws.com/

Note that the new docs will not appear live on the site until the cache (a complex,
distributed CDN system) is flushed. This requires someone with S3 keys. Contact Docker
(Sven Dowideit or John Costa) for assistance.
distributed CDN system) is flushed. The `make docs-release` command will do this
_if_ the `DISTRIBUTION_ID` is set to the Cloudfront distribution ID (ask the meta
team) - this will take at least 15 minutes to run and you can check its progress
with the CDN Cloudfront Chrome addin.

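If you prefer the command line to the Chrome add-in mentioned above, invalidation progress can also be polled with the AWS CLI. A hedged sketch, assuming the same `DISTRIBUTION_ID` and an AWS CLI with the CloudFront preview enabled (as `release.sh` itself does):

    # Placeholders: DISTRIBUTION_ID and INVALIDATION_ID are illustrative values.
    aws configure set preview.cloudfront true
    aws cloudfront list-invalidations --distribution-id "$DISTRIBUTION_ID"
    # Poll a specific invalidation until its Status reads "Completed":
    aws cloudfront get-invalidation --distribution-id "$DISTRIBUTION_ID" --id "$INVALIDATION_ID"
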
79 changes: 79 additions & 0 deletions docs/docvalidate.py
@@ -0,0 +1,79 @@
#!/usr/bin/env python

""" I honestly don't even know how the hell this works, just use it. """
__author__ = "Scott Stamp <scott@hypermine.com>"

from HTMLParser import HTMLParser
from urlparse import urljoin
from sys import setrecursionlimit
import re
import requests

setrecursionlimit(10000)
root = 'http://localhost:8000'


class DataHolder:

def __init__(self, value=None, attr_name='value'):
self._attr_name = attr_name
self.set(value)

def __call__(self, value):
return self.set(value)

def set(self, value):
setattr(self, self._attr_name, value)
return value

def get(self):
return getattr(self, self._attr_name)


class Parser(HTMLParser):
global root

ids = set()
crawled = set()
anchors = {}
pages = set()
save_match = DataHolder(attr_name='match')

def __init__(self, origin):
self.origin = origin
HTMLParser.__init__(self)

def handle_starttag(self, tag, attrs):
attrs = dict(attrs)
if 'href' in attrs:
href = attrs['href']

if re.match('^{0}|\/|\#[\S]{{1,}}'.format(root), href):
if self.save_match(re.search('.*\#(.*?)$', href)):
if self.origin not in self.anchors:
self.anchors[self.origin] = set()
self.anchors[self.origin].add(
self.save_match.match.groups(1)[0])

url = urljoin(root, href)

if url not in self.crawled and not re.match('^\#', href):
self.crawled.add(url)
Parser(url).feed(requests.get(url).content)

if 'id' in attrs:
self.ids.add(attrs['id'])
# explicit <a name=""></a> references
if 'name' in attrs:
self.ids.add(attrs['name'])


r = requests.get(root)
parser = Parser(root)
parser.feed(r.content)
for anchor in sorted(parser.anchors):
if not re.match('.*/\#.*', anchor):
for anchor_name in parser.anchors[anchor]:
if anchor_name not in parser.ids:
print 'Missing - ({0}): #{1}'.format(
anchor.replace(root, ''), anchor_name)
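A usage sketch for the checker above, assuming Python 2 with the `requests` package installed and a local `mkdocs serve` answering on http://localhost:8000 (the `root` hard-coded in the script):

    # Run the anchor/link checker against a locally served docs build.
    cd docs
    mkdocs serve &
    sleep 5                    # give the dev server a moment to start
    python docvalidate.py
    # Each "Missing - (<page>): #<anchor>" line is a fragment link whose target
    # id was not found anywhere in the crawled site.
    kill %1                    # stop the background mkdocs server
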
7 changes: 7 additions & 0 deletions docs/man/docker-build.1.md
@@ -8,6 +8,7 @@ docker-build - Build a new image from the source code at PATH
**docker build**
[**--force-rm**[=*false*]]
[**--no-cache**[=*false*]]
[**--pull**[=*false*]]
[**-q**|**--quiet**[=*false*]]
[**--rm**[=*true*]]
[**-t**|**--tag**[=*TAG*]]
@@ -36,6 +37,12 @@ as context.
**--no-cache**=*true*|*false*
Do not use cache when building the image. The default is *false*.

**--help**
Print usage statement

**--pull**=*true*|*false*
Always attempt to pull a newer version of the image. The default is *false*.

**-q**, **--quiet**=*true*|*false*
Suppress the verbose output generated by the containers. The default is *false*.

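For reference, a small usage sketch of the newly documented `--pull` flag (the image name and tag are illustrative):

    # Always attempt to pull a newer version of the base image before building.
    docker build --pull=true -t example/app:latest .
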
8 changes: 8 additions & 0 deletions docs/man/docker-events.1.md
@@ -6,6 +6,8 @@ docker-events - Get real time events from the server

# SYNOPSIS
**docker events**
[**--help**]
[**-f**|**--filter**[=*[]*]]
[**--since**[=*SINCE*]]
[**--until**[=*UNTIL*]]

@@ -23,6 +25,12 @@ and Docker images will report:
untag, delete

# OPTIONS
**--help**
Print usage statement

**-f**, **--filter**=[]
Provide filter values (i.e., 'event=stop')

**--since**=""
Show all events created since timestamp

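A small usage sketch of the newly documented `--filter` flag:

    # Stream only container stop events.
    docker events --filter 'event=stop'
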
2 changes: 1 addition & 1 deletion docs/man/docker-images.1.md
@@ -33,7 +33,7 @@ versions.
Show all images (by default filter out the intermediate image layers). The default is *false*.

**-f**, **--filter**=[]
Provide filter values (i.e. 'dangling=true')
Provide filter values (i.e., 'dangling=true')

**--no-trunc**=*true*|*false*
Don't truncate output. The default is *false*.
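And the corresponding usage sketch for `docker images`:

    # List only dangling (untagged) image layers.
    docker images --filter 'dangling=true'
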
2 changes: 1 addition & 1 deletion docs/man/docker-logs.1.md
@@ -14,7 +14,7 @@ CONTAINER
# DESCRIPTION
The **docker logs** command batch-retrieves whatever logs are present for
a container at the time of execution. This does not guarantee execution
order when combined with a docker run (i.e. your run may not have generated
order when combined with a docker run (i.e., your run may not have generated
any logs at the time you execute docker logs).

The **docker logs --follow** command combines commands **docker logs** and
2 changes: 2 additions & 0 deletions docs/man/docker-search.1.md
@@ -18,6 +18,8 @@ of images returned displays the name, description (truncated by default),
number of stars awarded, whether the image is official, and whether it
is automated.

*Note* - Search queries will only return up to 25 results

# OPTIONS
**--automated**=*true*|*false*
Only show automated builds. The default is *false*.
2 changes: 1 addition & 1 deletion docs/mkdocs.yml
@@ -18,7 +18,7 @@ use_absolute_urls: true
theme_dir: ./theme/mkdocs/
theme_center_lead: false

copyright: Copyright &copy; 2014, Docker, Inc.
copyright: Copyright &copy; 2014-2015, Docker, Inc.
google_analytics: ['UA-6096819-11', 'docker.io']

pages:
61 changes: 58 additions & 3 deletions docs/release.sh
@@ -72,31 +72,84 @@ setup_s3() {

build_current_documentation() {
mkdocs build
cd site/
gzip -9k -f search_content.json
cd ..
}

upload_current_documentation() {
src=site/
dst=s3://$BUCKET$1

cache=max-age=3600
if [ "$NOCACHE" ]; then
cache=no-cache
fi

echo
echo "Uploading $src"
echo " to $dst"
echo
#s3cmd --recursive --follow-symlinks --preserve --acl-public sync "$src" "$dst"
#aws s3 cp --profile $BUCKET --cache-control "max-age=3600" --acl public-read "site/search_content.json" "$dst"

# a really complicated way to send only the files we want
# if there are too many in any one set, aws s3 sync seems to fall over with 2 files to go
# versions.html_fragment
include="--recursive --include \"*.$i\" "
echo "uploading *.$i"
run="aws s3 cp $src $dst $OPTIONS --profile $BUCKET --cache-control \"max-age=3600\" --acl public-read $include"
run="aws s3 cp $src $dst $OPTIONS --profile $BUCKET --cache-control $cache --acl public-read $include"
echo "======================="
echo "$run"
echo "======================="
$run

# Make sure the search_content.json.gz file has the right content-encoding
aws s3 cp --profile $BUCKET --cache-control $cache --content-encoding="gzip" --acl public-read "site/search_content.json.gz" "$dst"
}

invalidate_cache() {
if [ "" == "$DISTRIBUTION_ID" ]; then
echo "Skipping Cloudfront cache invalidation"
return
fi

dst=$1

#aws cloudfront create-invalidation --profile docs.docker.com --distribution-id $DISTRIBUTION_ID --invalidation-batch '{"Paths":{"Quantity":1, "Items":["'+$file+'"]},"CallerReference":"19dec2014sventest1"}'
aws configure set preview.cloudfront true

files=($(cat changed-files | grep 'sources/.*$' | sed -E 's#.*docs/sources##' | sed -E 's#index\.md#index.html#' | sed -E 's#\.md#/index.html#'))
files[${#files[@]}]="/index.html"
files[${#files[@]}]="/versions.html_fragment"

len=${#files[@]}

echo "aws cloudfront create-invalidation --profile $AWS_S3_BUCKET --distribution-id $DISTRIBUTION_ID --invalidation-batch '" > batchfile
echo "{\"Paths\":{\"Quantity\":$len," >> batchfile
echo "\"Items\": [" >> batchfile

#for file in $(cat changed-files | grep 'sources/.*$' | sed -E 's#.*docs/sources##' | sed -E 's#index\.md#index.html#' | sed -E 's#\.md#/index.html#')
for file in "${files[@]}"
do
if [ "$file" == "${files[${#files[@]}-1]}" ]; then
comma=""
else
comma=","
fi
echo "\"$dst$file\"$comma" >> batchfile
done

echo "]}, \"CallerReference\":" >> batchfile
echo "\"$(date)\"}'" >> batchfile


echo "-----"
cat batchfile
echo "-----"
sh batchfile
echo "-----"
}


if [ "$OPTIONS" != "--dryrun" ]; then
setup_s3
fi
@@ -106,10 +159,12 @@ if [ "$BUILD_ROOT" == "yes" ]; then
echo "Building root documentation"
build_current_documentation
upload_current_documentation
[ "$NOCACHE" ] || invalidate_cache
fi

#build again with /v1.0/ prefix
sed -i "s/^site_url:.*/site_url: \/$MAJOR_MINOR\//" mkdocs.yml
echo "Building the /$MAJOR_MINOR/ documentation"
build_current_documentation
upload_current_documentation "/$MAJOR_MINOR/"
[ "$NOCACHE" ] || invalidate_cache "/$MAJOR_MINOR"
2 changes: 1 addition & 1 deletion docs/sources/articles/baseimages.md
@@ -5,7 +5,7 @@ page_keywords: Examples, Usage, base image, docker, documentation, examples
# Create a Base Image

So you want to create your own [*Base Image*](
/terms/image/#base-image-def)? Great!
/terms/image/#base-image)? Great!

The specific process will depend heavily on the Linux distribution you
want to package. We have some examples below, and you are encouraged to
10 changes: 5 additions & 5 deletions docs/sources/articles/basics.md
@@ -17,7 +17,7 @@ If you get `docker: command not found` or something like
incomplete Docker installation or insufficient privileges to access
Docker on your machine.

Please refer to [*Installation*](/installation/#installation-list)
Please refer to [*Installation*](/installation)
for installation instructions.

## Download a pre-built image
@@ -26,7 +26,7 @@ for installation instructions.
$ sudo docker pull ubuntu

This will find the `ubuntu` image by name on
[*Docker Hub*](/userguide/dockerrepos/#find-public-images-on-docker-hub)
[*Docker Hub*](/userguide/dockerrepos/#searching-for-images)
and download it from [Docker Hub](https://hub.docker.com) to a local
image cache.

@@ -37,7 +37,7 @@ image cache.
> characters of the full image ID - which can be found using
> `docker inspect` or `docker images --no-trunc=true`
**If you're using OS X** then you shouldn't use `sudo`.
{{ include "no-remote-sudo.md" }}

## Running an interactive shell

@@ -174,6 +174,6 @@ will be stored (as a diff). See which images you already have using the
You now have an image state from which you can create new instances.

Read more about [*Share Images via
Repositories*](/userguide/dockerrepos/#working-with-the-repository) or
Repositories*](/userguide/dockerrepos) or
continue to the complete [*Command
Line*](/reference/commandline/cli/#cli)
Line*](/reference/commandline/cli)
