Packages and Binaries:

cri-tools

This package contains a set of debugging and validation tools for the Kubelet Container Runtime Interface (CRI), including:

  • crictl: a command-line interface for the Kubelet CRI.
  • critest: validation test suites for the Kubelet CRI.

Installed size: 73.94 MB
How to install: sudo apt install cri-tools
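
To confirm the installation and that crictl can reach a runtime, query the runtime version. This is a minimal sketch assuming containerd is the local runtime; substitute the socket of whichever CRI runtime is actually running:

sudo crictl -r unix:///run/containerd/containerd.sock version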

crictl
root@kali:~# crictl -h
NAME:
   crictl - client for CRI

USAGE:
   crictl [global options] command [command options] [arguments...]

COMMANDS:
   attach              Attach to a running container
   create              Create a new container
   exec                Run a command in a running container
   version             Display runtime version information
   images, image, img  List images
   inspect             Display the status of one or more containers
   inspecti            Return the status of one or more images
   imagefsinfo         Return image filesystem info
   inspectp            Display the status of one or more pods
   logs                Fetch the logs of a container
   port-forward        Forward local port to a pod
   ps                  List containers
   pull                Pull an image from a registry
   run                 Run a new container inside a sandbox
   runp                Run a new pod
   rm                  Remove one or more containers
   rmi                 Remove one or more images
   rmp                 Remove one or more pods
   pods                List pods
   start               Start one or more created containers
   info                Display information of the container runtime
   stop                Stop one or more running containers
   stopp               Stop one or more running pods
   update              Update one or more running containers
   config              Get and set crictl client configuration options
   stats               List container(s) resource usage statistics
   statsp              List pod statistics. Stats represent a structured API that will fulfill the Kubelet's /stats/summary endpoint.
   metricsp            List pod metrics. Metrics are unstructured key/value pairs gathered by CRI meant to replace cAdvisor's /metrics/cadvisor endpoint.
   completion          Output shell completion code
   checkpoint          Checkpoint one or more running containers
   runtime-config      Retrieve the container runtime configuration
   events, event       Stream the events of containers
   help, h             Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --config value, -c value            Location of the client config file. If not specified and the default does not exist, the program's directory is searched as well (default: "/etc/crictl.yaml") [$CRI_CONFIG_FILE]
   --debug, -D                         Enable debug mode (default: false)
   --image-endpoint value, -i value    Endpoint of CRI image manager service (default: uses 'runtime-endpoint' setting) [$IMAGE_SERVICE_ENDPOINT]
   --runtime-endpoint value, -r value  Endpoint of CRI container runtime service (default: uses in order the first successful one of [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]). Default is now deprecated and the endpoint should be set instead. [$CONTAINER_RUNTIME_ENDPOINT]
   --timeout value, -t value           Timeout of connecting to the server in seconds (e.g. 2s, 20s.). 0 or less is set to default (default: 2s)
   --help, -h                          show help
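
In practice, crictl is usually pointed at the local runtime socket once, either via /etc/crictl.yaml or with the -r flag, and then driven with the subcommands listed above. The snippet below is a sketch assuming containerd; the socket paths and the <container-id> placeholder must be adapted to the host.

# /etc/crictl.yaml (example)
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false

# Example invocations (the -r flag takes precedence over the config file)
crictl ps -a                   # list all containers, including exited ones
crictl pods                    # list pod sandboxes
crictl images                  # list pulled images
crictl inspect <container-id>  # print container status as JSON
crictl logs <container-id>     # fetch the logs of a container
crictl -r unix:///run/crio/crio.sock ps   # same, but explicitly against CRI-O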

critest
root@kali:~# critest -h
Controlling Test Order
  --ginkgo.seed [int] (default: randomly generated by Ginkgo)
    The seed used to randomize the spec suite.
  --ginkgo.randomize-all 
    If set, ginkgo will randomize all specs together.  By default, ginkgo only
    randomizes the top level Describe, Context and When containers.

Controlling Test Parallelism
These are set by the Ginkgo CLI, do not set them manually via go test.
Use ginkgo -p or ginkgo -procs=N instead.
  --ginkgo.parallel.process [int] (default: 1)
    This worker process's (one-indexed) process number.  For running specs in
    parallel.
  --ginkgo.parallel.total [int] (default: 1)
    The total number of worker processes.  For running specs in parallel.
  --ginkgo.parallel.host [string] (default: set by Ginkgo CLI)
    The address for the server that will synchronize the processes.

Filtering Tests
  --ginkgo.label-filter [expression] 
    If set, ginkgo will only run specs with labels that match the label-filter. 
    The passed-in expression can include boolean operations (!, &&, ||, ','),
    groupings via '()', and regular expressions '/regexp/'.  e.g. '(cat || dog)
    && !fruit'
  --ginkgo.focus [string] 
    If set, ginkgo will only run specs that match this regular expression. Can
    be specified multiple times, values are ORed.
  --ginkgo.skip [string] 
    If set, ginkgo will only run specs that do not match this regular
    expression. Can be specified multiple times, values are ORed.
  --ginkgo.focus-file [file (regexp) | file:line | file:lineA-lineB | file:line,line,line] 
    If set, ginkgo will only run specs in matching files. Can be specified
    multiple times, values are ORed.
  --ginkgo.skip-file [file (regexp) | file:line | file:lineA-lineB | file:line,line,line] 
    If set, ginkgo will skip specs in matching files. Can be specified multiple
    times, values are ORed.

Failure Handling
  --ginkgo.fail-on-pending 
    If set, ginkgo will mark the test suite as failed if any specs are pending.
  --ginkgo.fail-fast 
    If set, ginkgo will stop running a test suite after a failure occurs.
  --ginkgo.flake-attempts [int] (default: 0 - failed tests are not retried)
    Make up to this many attempts to run each spec. If any of the attempts
    succeed, the suite will not be failed.

Controlling Output Formatting
  --ginkgo.no-color 
    If set, suppress color output in default reporter.
  --ginkgo.v 
    If set, emits more output including GinkgoWriter contents.
  --ginkgo.vv 
    If set, emits with maximal verbosity - includes skipped and pending tests.
  --ginkgo.succinct 
    If set, default reporter prints out a very succinct report
  --ginkgo.trace 
    If set, default reporter prints out the full stack trace when a failure
    occurs
  --ginkgo.show-node-events 
    If set, default reporter prints node > Enter and < Exit events when specs
    fail
  --ginkgo.json-report [filename.json] 
    If set, Ginkgo will generate a JSON-formatted test report at the specified
    location.
  --ginkgo.junit-report [filename.xml] 
    If set, Ginkgo will generate a conformant junit test report in the specified
    file.
  --ginkgo.teamcity-report [filename] 
    If set, Ginkgo will generate a Teamcity-formatted test report at the
    specified location.

Debugging Tests
In addition to these flags, Ginkgo supports a few debugging environment
variables.  To change the parallel server protocol set GINKGO_PARALLEL_PROTOCOL
to HTTP.  To avoid pruning callstacks set GINKGO_PRUNE_STACK to FALSE.
  --ginkgo.dry-run 
    If set, ginkgo will walk the test hierarchy without actually running
    anything.  Best paired with -v.
  --ginkgo.poll-progress-after [duration] (default: 0)
    Emit node progress reports periodically if node hasn't completed after this
    duration.
  --ginkgo.poll-progress-interval [duration] (default: 10s)
    The rate at which to emit node progress reports after poll-progress-after
    has elapsed.
  --ginkgo.source-root [string] 
    The location to look for source code when generating progress reports.  You
    can pass multiple --source-root flags.
  --ginkgo.timeout [duration] (default: 1h)
    Test suite fails if it does not complete within the specified timeout.
  --ginkgo.grace-period [duration] (default: 30s)
    When interrupted, Ginkgo will wait for GracePeriod for the current running
    node to exit before moving on to the next one.
  --ginkgo.output-interceptor-mode [dup, swap, or none] 
    If set, ginkgo will use the specified output interception strategy when
    running in parallel.  Defaults to dup on unix and swap on windows.

Go test flags
  -benchmark
    	Run benchmarks instead of validation tests
  -benchmarking-output-dir string
    	Optional path to a directory in which benchmarking data should be placed.
  -benchmarking-params-file string
    	Optional path to a YAML file specifying benchmarking configuration options.
  -config string
    	Location of the client config file. If not specified and the default does not exist, the program's directory is searched as well
  -image-endpoint string
    	Image service socket for client to connect.
  -image-service-timeout duration
    	Timeout when trying to connect to image service.
  -parallel int
    	The number of parallel test nodes to run (default 1)
  -registry-prefix string
    	A possible registry prefix added to all images, like 'localhost:5000'
  -report-dir string
    	Path to the directory where the JUnit XML reports should be saved. Default is empty, which doesn't generate these reports.
  -report-prefix string
    	Optional prefix for JUnit XML reports. Default is empty, which doesn't prepend anything to the default name.
  -runtime-endpoint string
    	Runtime service socket for client to connect.
  -runtime-handler string
    	Runtime handler to use in the test.
  -runtime-service-timeout duration
    	Timeout when trying to connect to a runtime service.
  -test-images-file string
    	Optional path to a YAML file containing references to custom container images to be used in tests.
  -test.bench regexp
    	run only benchmarks matching regexp
  -test.benchmem
    	print memory allocations for benchmarks
  -test.benchtime d
    	run each benchmark for duration d
  -test.blockprofile file
    	write a goroutine blocking profile to file
  -test.blockprofilerate rate
    	set blocking profile rate (see runtime.SetBlockProfileRate)
  -test.count n
    	run tests and benchmarks n times
  -test.coverprofile file
    	write a coverage profile to file
  -test.cpu list
    	comma-separated list of cpu counts to run each test with
  -test.cpuprofile file
    	write a cpu profile to file
  -test.failfast
    	do not start new tests after the first test failure
  -test.fullpath
    	show full file names in error messages
  -test.fuzz regexp
    	run the fuzz test matching regexp
  -test.fuzzcachedir string
    	directory where interesting fuzzing inputs are stored (for use only by cmd/go)
  -test.fuzzminimizetime value
    	time to spend minimizing a value after finding a failing input
  -test.fuzztime value
    	time to spend fuzzing; default is to run indefinitely
  -test.fuzzworker
    	coordinate with the parent process to fuzz random values (for use only by cmd/go)
  -test.gocoverdir string
    	write coverage intermediate files to this directory
  -test.list regexp
    	list tests, examples, and benchmarks matching regexp then exit
  -test.memprofile file
    	write an allocation profile to file
  -test.memprofilerate rate
    	set memory allocation profiling rate (see runtime.MemProfileRate)
  -test.mutexprofile string
    	write a mutex contention profile to the named file after execution
  -test.mutexprofilefraction int
    	if >= 0, calls runtime.SetMutexProfileFraction()
  -test.outputdir dir
    	write profiles to dir
  -test.paniconexit0
    	panic on call to os.Exit(0)
  -test.parallel n
    	run at most n tests in parallel
  -test.run regexp
    	run only tests and examples matching regexp
  -test.short
    	run smaller test suite to save time
  -test.shuffle string
    	randomize the execution order of tests and benchmarks
  -test.skip regexp
    	do not list or run tests matching regexp
  -test.testlogfile file
    	write test action log to file (for use only by cmd/go)
  -test.timeout d
    	panic test binary after duration d (default 0, timeout disabled)
  -test.trace file
    	write an execution trace to file
  -test.v
    	verbose: print additional output
  -version
    	Display version of critest
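
As an example, critest can be run directly against a runtime socket, optionally narrowed to a subset of specs or switched to benchmark mode. This is a sketch assuming containerd; the focus pattern "PodSandbox" is only illustrative:

critest -runtime-endpoint unix:///run/containerd/containerd.sock
critest -runtime-endpoint unix:///run/containerd/containerd.sock --ginkgo.focus "PodSandbox"
critest -runtime-endpoint unix:///run/containerd/containerd.sock -benchmark

The first command runs the full validation suite, the second limits it to specs matching the focus regular expression, and the third runs the benchmark suite instead of the validation tests.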


Updated on: 2024-Feb-16