CLI Tools

1 - checkconfig

checkconfig loads the Prow configuration given with --config-path, --job-config-path and --plugin-config in order to validate it. Use checkconfig as a pre-submit for any repository holding Prow configuration to ensure that check-ins do not break anything.
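
For example, a presubmit job for a configuration repository could run it like this (the paths are hypothetical and should match your repository layout):

checkconfig \
    --config-path=config/prow/config.yaml \
    --job-config-path=config/prow/jobs \
    --plugin-config=config/prow/plugins.yaml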

2 - config-bootstrapper

config-bootstrapper is used to bootstrap a configuration that would be incrementally updated by the config-updater Prow plugin.

When a set of configurations do not exist (for example, on a clean redeployment or in a disaster recovery situation), the config-updater plugin is not useful as it can only upload incremental updates. This tool is meant to be used in those situations to set up the config to the correct base state and hand off ownership to the plugin for updates.

Provide the config-bootstrapper with the latest state of the Prow configuration (plugins.yaml, config.yaml, any job configuration files) to bootstrap the cluster with the latest configuration.

Sample usage:

./config-bootstrapper \
    --dry-run=false \
    --source-path=.  \
    --config-path=prowconfig/config.yaml \
    --plugin-config=prowconfig/plugins.yaml \
    --job-config-path=prowconfig/jobs

3 - generic-autobumper

This tool automates upgrading the versions of images used by deployments such as the prow.k8s.io Prow deployment. Its workflow is:

  • Given a local git repo containing the manifests of Prow component deployment, e.g., /config/prow/cluster folder in this repo.
  • Find the most recent tags of the given prefixes in the gcr.io registry and update the YAML files to reference them.
  • git-commit the change, push it to the remote repo, and create/update a PR, e.g., test-infra/pull/14249, for the change.

Cluster admins can upgrade the image versions by approving the PR.

Define Prow jobs to utilize this tool:

  • Periodic job for the above workflow: Periodically generate PRs for bumping the version, e.g., ci-test-infra-autobump-prow (a sketch of such a job follows this list).
  • Postsubmit job for auto-deployment: In order to make the changes effective in the Prow cluster, a postsubmit job, e.g., post-test-infra-deploy-prow for prow.k8s.io, is defined for deploying the YAML files.
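
A minimal sketch of such a periodic job is shown below; the job name, interval, image tag, and config path are hypothetical and should be adapted to your deployment:

periodics:
- name: ci-example-autobump-prow   # hypothetical job name
  interval: 1h
  decorate: true                   # required so extra_refs is cloned
  extra_refs:                      # provides the committable local repo (see Requirements)
  - org: kubernetes
    repo: test-infra
    base_ref: master
  spec:
    containers:
    - image: gcr.io/k8s-prow/generic-autobumper:latest  # hypothetical tag
      command:
      - generic-autobumper
      args:
      - --config=config/prow/autobump-config.yaml       # hypothetical path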

Requirements

To use this tool, the following requirements must be met:

  • a “committable” local repo, i.e., one in which the git-commit command can be executed successfully (e.g., git-config is set up correctly). This can be achieved by cloning the repo via extra_refs, e.g.,

      extra_refs:
      - org: kubernetes
        repo: test-infra
        base_ref: master
    
  • a GitHub token with sufficient permissions for this tool to push changes and create PRs against the remote repo.

  • a YAML config file, passed in with the flag -config=FILEPATH, that specifies the following information. For details about what belongs in the config, see the documentation for the Options struct and the example below.

e.g.,

gitHubLogin: "k8s-ci-robot"
gitHubToken: "/etc/github-token/oauth"
gitName: "Kubernetes Prow Robot"
gitEmail: "k8s.ci.robot@gmail.com"
onCallAddress: "https://storage.googleapis.com/kubernetes-jenkins/oncall.json"
skipPullRequest: false
gitHubOrg: "kubernetes"
gitHubRepo: "test-infra"
remoteName: "test-infra"
upstreamURLBase: "https://raw.githubusercontent.com/kubernetes/test-infra/master"
includedConfigPaths:
  - "."
excludedConfigPaths:
  - "config/prow-staging"
extraFiles:
  - "config/jobs/kubernetes/kops/build-grid.py"
  - "config/jobs/kubernetes/kops/build-pipeline.py"
  - "releng/generate_tests.py"
  - "images/kubekins-e2e/Dockerfile"
targetVersion: "latest"
prefixes:
  - name: "Prow"
    prefix: "gcr.io/k8s-prow/"
    refConfigFile: "config/prow/cluster/deck_deployment.yaml"
    stagingRefConfigFile: "config/prow-staging/cluster/deck_deployment.yaml"
    repo: "https://github.com/kubernetes/test-infra"
    summarise: true
    consistentImages: true
  - name: "Boskos"
    prefix: "gcr.io/k8s-staging-boskos/"
    refConfigFile: "config/prow/cluster/build/boskos.yaml"
    stagingRefConfigFile: "config/prow-staging/cluster/boskos.yaml"
    repo: "https://github.com/kubernetes-sigs/boskos"
    summarise: false
    consistentImages: true
  - name: "Prow-Test-Images"
    prefix: "gcr.io/k8s-testimages/"
    repo: "https://github.com/kubernetes/test-infra"
    summarise: false
    consistentImages: false
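
With such a config saved at, for example, config/prow/autobump-config.yaml (a hypothetical path), the tool can be invoked as follows:

generic-autobumper --config=config/prow/autobump-config.yaml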

4 - invitations-accepter

The invitations-accepter tool approves all pending repository invitations.

Usage

Example:

invitations-accepter --dry-run=false --github-token-path=/etc/github/oauth

Using with GitHub Apps:

invitations-accepter --dry-run=false --github-app-id=12345 --github-app-private-key-path=/etc/github/cert

5 - mkpj

This is a placeholder page. Some content still needs to be filled in.

6 - mkpod

This is a placeholder page. Some content still needs to be filled in.

7 - Peribolos

Peribolos allows the org settings, teams and memberships to be declared in a yaml file. GitHub is then updated to match the declared configuration.

See the kubernetes/org repo, in particular its merge and update.sh parts, to see this tool in action.

Peribolos was the subject of a KubeCon talk: How Kubernetes Uses GitOps to Manage GitHub Communities at Scale

Etymology

A peribolos is a wall that encloses a court in Greek/Roman architecture.

Org configuration

Extend the primary prow config.yaml document to include a top-level orgs key that looks like the following:

orgs:
  this-org:
    # org settings
    company: foo
    email: foo
    name: foo
    description: foo
    has_organization_projects: true
    has_repository_projects: true
    default_repository_permission: read
    members_can_create_repositories: false

    # org member settings
    members:
    - anne
    - bob
    admins:
    - carl

    # team settings
    teams:
      node:
        # team config
        description: people working on node backend
        privacy: closed
        previously:
        - backend  # If a backend team exists, rename it to node

        # team members
        members:
        - anne
        maintainers:
        - jane
        repos: # Ensure the team has the following permissions levels on repos in the org
          some-repo: admin
          other-repo: read
      another-team:
        ...
      ...
  that-org:
    ...

This config will:

  • Ensure the org settings match the following:
    • Set the company, email, name and description fields for the org to foo
    • Allow projects to be created at the org and repo levels
    • Give everyone read access to repos by default
    • Disallow members from creating repositories
  • Ensure the following memberships exist:
    • anne and bob are members, carl is an admin
  • Configure the node and another-team teams in the following manner:
    • Set node’s description and privacy setting.
    • Rename the backend team to node
    • Add anne as a member and jane as a maintainer to node
    • Similar things for another-team (details elided)
  • Ensure that the node team has admin rights to some-repo, read access to other-repo, and no other privileges

Note that any fields missing from the config will not be managed by peribolos. So if description is missing from the org setting, the current value will remain.
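
For example, a config as minimal as the following (a hypothetical sketch) would manage only the membership lists, leaving every other org setting and all teams untouched:

orgs:
  this-org:
    admins:
    - carl
    members:
    - anne
    - bob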

For more details, please see the GitHub documentation on edit org, update org membership, edit team, and update team membership.

Initial seed

Peribolos can dump the current configuration of an org. For example, to dump the kubernetes-sigs org, do the following:

$ go run ./prow/cmd/peribolos --dump kubernetes-sigs --github-token-path ~/github-token | tee ~/current.yaml
...
INFO: Build completed successfully, 1 total action
...
{"client":"github","component":"peribolos","level":"info","msg":"GetOrg(kubernetes-sigs)","time":"2018-09-28T13:17:42-07:00"}
{"client":"github","component":"peribolos","level":"info","msg":"ListOrgMembers(kubernetes-sigs, admin)","time":"2018-09-28T13:17:42-07:00"}
{"client":"github","component":"peribolos","level":"info","msg":"ListOrgMembers(kubernetes-sigs, member)","time":"2018-09-28T13:17:43-07:00"}
{"client":"github","component":"peribolos","level":"info","msg":"ListTeams(kubernetes-sigs)","time":"2018-09-28T13:17:45-07:00"}
{"client":"github","component":"peribolos","level":"info","msg":"ListTeamMembers(2671356, maintainer)","time":"2018-09-28T13:17:46-07:00"}
{"client":"github","component":"peribolos","level":"info","msg":"ListTeamMembers(2671356, member)","time":"2018-09-28T13:17:46-07:00"}
...
admins:
- calebamiles
- cblecker
- etc
billing_email: secret@example.com
company: ""
default_repository_permission: read
description: Org for Kubernetes SIG-related work
email: ""
has_organization_projects: true
has_repository_projects: true
location: ""
members:
- ameukam
- amwat
- ant31
- etc
teams:
  application-admins:
    description: admin access to application
    maintainers:
    - kow3ns
    members:
    - mattfarina
    - prydonius
    privacy: closed
  architecture-tracking-admins:
    description: admin permission for architecture-tracking
    maintainers:
    - jdumars
    - bgrant0607
    privacy: closed
  # etc

Open ~/current.yaml and then delete any metadata you don’t want peribolos to manage (such as billing_email, or all the teams, etc).

Apply this config in dry-run mode to see what would happen (hopefully nothing since you just created it):

$ go run ./prow/cmd/peribolos --config-path ~/current.yaml --github-token-path ~/github-token # --confirm

{"client":"github","component":"peribolos","level":"info","msg":"GetOrg(kubernetes-sigs)","time":"2018-09-27T23:07:13Z"}
{"client":"github","component":"peribolos","level":"info","msg":"ListOrgInvitations(kubernetes-sigs)","time":"2018-09-27T23:07:13Z"}
{"client":"github","component":"peribolos","level":"info","msg":"ListOrgMembers(kubernetes-sigs, admin)","time":"2018-09-27T23:07:13Z"}
{"client":"github","component":"peribolos","level":"info","msg":"ListOrgMembers(kubernetes-sigs, member)","time":"2018-09-27T23:07:14Z"}
...

Settings

In order to mitigate the chance of applying erroneous configs, the peribolos binary includes a few safety checks:

  • --required-admins= - a list of people who must be configured as admins in order to accept the config (defaults to empty list)
  • --min-admins=5 - the config must specify at least this many admins
  • --require-self=true - require the bot applying the config to be an admin.

These flags are designed to ensure that any problems can be corrected by rerunning the tool with a fixed config and/or binary.

  • --maximum-removal-delta=0.25 - reject a config that deletes more than 25% of the current memberships.

This flag is designed to protect against typos in the configuration which might cause massive, unwanted deletions. Raising this value to 1.0 will allow deleting everyone, and reducing it to 0.0 will prevent any deletions.

  • --confirm=false - no GitHub mutations will be made until this flag is true. It is safe to run the binary without this flag: it will print what it would do without actually making any changes.
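
For example, a guarded apply combining these safety checks might look like the following; the paths follow the earlier examples and the flag values are illustrative:

go run ./prow/cmd/peribolos \
    --config-path ~/current.yaml \
    --github-token-path ~/github-token \
    --min-admins=5 \
    --maximum-removal-delta=0.25 \
    --confirm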

See go run ./prow/cmd/peribolos --help for the full and current list of settings that can be configured with flags.

8 - Phaino

Run prowjobs on your local workstation with phaino.

Plato believed that ideas and forms are the ultimate truth, whereas we only see the imperfect physical appearances of those ideas.

He likens this in his Allegory of the Cave to someone living in a cave who can only see the shadows projected on the wall from objects passing in front of a fire.

Phaino is the act of making those imperfect shadows appear.

Phaino shares a prefix with Pharos, meaning lighthouse and in particular the ancient one in Alexandria.

Usage

Usage:

# Use a job from deck
go run ./prow/cmd/phaino $URL # or /path/to/prowjob.yaml
# Use mkpj to create the job
go run ./prow/cmd/mkpj --config-path=/path/to/prow/config.yaml --job-config-path=/path/to/prow/job/configs --job=foo > /tmp/foo
go run ./prow/cmd/phaino /tmp/foo

Phaino is an interactive utility; it will prompt you for a local copy of any secrets or volumes that the Prow Job may require.

Common options

  • --grace=5m controls how long to wait for interrupted jobs before terminating
  • --print the command that runs each job without running it
  • --privileged jobs are allowed to run instead of rejected
  • --timeout=10m controls how long to allow jobs to run before interrupting them
  • --code-mount-path=/go changes the path where code is mounted in the container
  • --skip-volume-mounts=volume1,volume2 skips the listed volume mounts that are defined in the job spec
  • --extra-volume-mounts=/go/src/sigs.k8s.io/prow=/Users/xyz/k8s-test-infra adds extra volume mounts for the container; the key is the container mount path and the value is the local path
  • --skip-envs=env1,env2 skips the listed env vars that are defined in the job spec
  • --extra-envs=env1=val1,env2=val2 includes the extra env vars needed for the container
  • --use-local-gcloud-credentials controls whether to use your local gcloud credentials
  • --use-local-kubeconfig controls whether to use your local kubeconfig

Common options usage scenarios

Phaino interactively prompts for the repo location, volume mounts, and so on. To skip the prompts, use the following flags instead:

  • If the repo needs to be cloned under GOPATH, use:

    --code-mount-path=/whatever/go/src # Controls where source code is mounted in the container
    --extra-volume-mounts=/whatever/go/src/sigs.k8s.io/prow=/Users/xyz/k8s-test-infra
    
  • If the job requires mounting a kubeconfig, assuming the mount is named kubeconfig, use:

    --use-local-kubeconfig
    --skip-volume-mounts=kubeconfig
    
  • If the job requires mounting gcloud default credentials, assuming the mount is named service-account, use:

    --use-local-gcloud-credentials
    --skip-volume-mounts=service-account
    
  • If the job requires mounting something else, like name: foo; mountPath: /bar, use:

    --extra-volume-mounts=/bar=/Users/xyz/local/bar
    --skip-volume-mounts=foo
    
  • If the job requires env vars, use:

    --extra-envs=env1=val1,env2=val2
    

See go run ./prow/cmd/phaino --help for full option list.
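
Putting a few of these flags together, a full invocation might look like the following (the prowjob file and env var are hypothetical):

# /tmp/foo is a prowjob created earlier with mkpj; FOO=bar is a hypothetical env var
go run ./prow/cmd/phaino \
    --use-local-kubeconfig \
    --skip-volume-mounts=kubeconfig \
    --extra-envs=FOO=bar \
    /tmp/foo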

Usage examples

URL example

  • Go to your deck deployment
  • Pick a job and click the rerun icon on the left
  • Copy the URL (something like https://prow.k8s.io/rerun?prowjob=d08f1ca5-5d63-11e9-ab62-0a580a6c1281)
  • Paste it as a phaino arg
    • go run ./prow/cmd/phaino https://prow.k8s.io/rerun?prowjob=d08f1ca5-5d63-11e9-ab62-0a580a6c1281
    • Alternatively go run ./prow/cmd/phaino <(curl $URL)

Configuration example

  • Use mkpj to create the job and pipe this to phaino
    • For prow.k8s.io jobs use //config:mkpj

      go run ./config:mkpj --job=pull-test-infra-bazel > /tmp/foo
      go run ./prow/cmd/phaino /tmp/foo
      
    • Other deployments will need to clone that rule and/or pass in extra flags:

      go run ./prow/cmd/mkpj --config-path=/my/config.yaml --job=my-job > /tmp/foo
      go run ./prow/cmd/phaino /tmp/foo
      

9 - Phony

phony sends fake GitHub webhooks.

Running a GitHub event manager

phony is most commonly used for testing hook and its plugins, but can be used for testing any externally exposed service configured to receive GitHub events (external plugins).

To get an idea of phony’s behavior, start a local instance of hook with this:

go run prow/cmd/hook/main.go \
 --config-path=config/prow/config.yaml \
 --plugin-config=config/prow/plugins.yaml \
 --hmac-secret-file=path/to/hmac \
 --github-token-path=path/to/github-token

# Note:
# --hmac-secret-file is required for running locally, use the same hmac token for phony below

Usage

Once you have a running server that manages GitHub webhook events, generate an HMAC token (using the same process as in Prow), and point a phony pull request event at it using the flags described below:

phony --help
Usage of ./phony:
  -address string
     Where to send the fake hook. (default "http://localhost:8888/hook")
  -event string
     Type of event to send, such as pull_request. (default "ping")
  -hmac string
     HMAC token to sign payload with. (default "abcde12345")
  -payload string
     File to send as payload. If unspecified, sends "{}".

If you are testing hook and successfully sent the webhook from phony, you should see a log from hook resembling the following:

{"author":"","component":"hook","event-GUID":"GUID","event-type":"pull_request","level":"info","msg":"Pull request .","org":"","pr":0,"repo":"","time":"2018-05-29T11:38:57-07:00","url":""}

A list of supported events can be found in the GitHub API Docs. Some example event payloads can be found in the examples directory.
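
For example, to send one of those example payloads to a locally running hook (the payload filename is illustrative; check the examples directory for actual files):

# run from the root of the test-infra repo
phony --address=http://localhost:8888/hook \
    --event=pull_request \
    --hmac=abcde12345 \
    --payload=prow/cmd/phony/examples/pull_request.json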

10 - tackle

Prow’s tackle utility walks you through deploying a new instance of Prow in a couple of minutes. Try it out!

Installing tackle

Tackle currently needs to be built from source. The following steps will walk you through the process:

  1. Clone the test-infra repository:
git clone git@github.com:kubernetes/test-infra.git
  2. Build tackle (this requires a working Go installation on your system):
cd test-infra/prow/cmd/tackle && go build -o tackle
  3. Optionally move tackle onto your $PATH:
sudo mv tackle /usr/sbin/tackle

Deploying prow

Note: Creating a cluster using the tackle utility assumes you have the gcloud application in your $PATH and are logged in. If you are doing this on another cloud, skip to the Manual deployment section below.

Installing Prow using tackle will help you through the following steps:

  • Choosing a kubectl context (or creating a cluster on GCP / getting its credentials if necessary)
  • Deploying prow into that cluster
  • Configuring GitHub to send prow webhooks for your repos. This is where you’ll provide the absolute /path/to/github/token

To install prow run the following and follow the on-screen instructions:

  1. Run tackle:
tackle
  2. Once your cluster is created, you’ll get a prompt to apply a starter.yaml. Before you do that, open another terminal and apply the Prow CRDs using:
kubectl apply --server-side=true -f https://raw.githubusercontent.com/kubernetes/test-infra/master/config/prow/cluster/prowjob-crd/prowjob_customresourcedefinition.yaml
  3. After that, specify the starter.yaml you want to use (please make sure to replace the values mentioned here). Once that is done, some pods will still not be in the Running state because the secret containing the credentials for the GCS bucket has not yet been created. To create it, follow the steps in Configure a GCS bucket.

  4. Once that is done, tackle should show you the URL where you can access the Prow dashboard. To use it with your repositories, head over to the settings of the GitHub app you created and, under webhook secret, supply the HMAC token you specified in the starter.yaml.

  5. Finally, install the GitHub app on the repositories you want (this is only needed if you ran tackle with the --skip-github flag) and you should now be able to use Prow :)

See the Next Steps section after running this utility.