This page is about the Cloudrexx CLI script (./cx). It focuses on technical "nice to know"s.
- 1 Structure
- 2 Internal Commands
- 3 Docker setup
- 4 Enhancement ideas / Known problems
- 5 References
The script is divided into three parts: Windows, Unix and PHP:
Bash 3 / Bash 4
The script requires Bash version 4. In order for the script to run on Bash 3 (which is the default on macOS), all calls are routed through a wrapping container (docker image cloudrexx/ubuntu).
In order to allow the cx script inside the wrapping Ubuntu container to control Docker on the host system, the Docker socket is mounted into that container. This works nicely and allows controlling Docker from inside the container. However, Docker Compose has an issue when mounting the current directory using ".": if run from inside the container, it tries to mount the container's current directory.
To circumvent that, the script passes the argument --proxy-host-dir=<cd> to all calls of cx. This way the script running in the container knows the directory on the host and can change the mount path accordingly.
The same can be achieved by setting the environment variable PROXY_HOST_DIR, which is used when calling ./cx env shell --wrapper.
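As a minimal sketch of this mechanism (not the actual cx implementation; the function name is made up), the wrapper could resolve the host directory like this:

```shell
# Hypothetical sketch of how the host directory could be resolved inside
# the wrapper container: an explicit --proxy-host-dir value wins, then the
# PROXY_HOST_DIR environment variable, then the current directory.
resolve_host_dir() {
  # $1: value passed via --proxy-host-dir (may be empty)
  if [ -n "$1" ]; then
    echo "$1"
  elif [ -n "$PROXY_HOST_DIR" ]; then
    echo "$PROXY_HOST_DIR"
  else
    pwd
  fi
}
```

The resolved directory can then be used for the "." mount instead of the container's own working directory.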
Windows / Unix
Using the same script for Unix and Windows is achieved by using the following scheme:
@GOTO WIN \
2>/dev/null
# Unix part
exit
:WIN
REM Batch script
The Windows part (batch script) is a simplistic wrapper that calls the Unix part in a Docker container running Ubuntu. It checks whether a file named cx is available in the current directory. If not, it instructs the Ubuntu container to download it. After that it calls ./cx inside the Ubuntu container with all arguments you passed to it on Windows.
The wrapping mechanism is the same as for Bash version 3.
Bash / PHP
The Unix part contains some internal commands. For all other calls the Unix part is a simple wrapper that calls php index.php in the correct Docker container. If the container is not available, it tries to use the PHP executable on the host (if any). For the PHP part see Command Mode.
The env command manages an environment. There are two different types of environments: vhost and standalone.
In standalone environments, the port on which the web container accepts connections is bound directly to a port on the host. Therefore there can only be one standalone environment per port on your system.
Vhost environments all share the same port. In order for this to work, a proxy container needs to be running. For more info see the envs command.
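To illustrate the two flavors, here is a sketch as docker-compose.yml excerpts (the port and hostname are example values, and the two web blocks are alternatives, not one file):

```yaml
# Standalone environment: the web container's port is bound directly
# to a host port, so only one environment can claim that host port.
services:
  web:
    ports:
      - "8080:80"

# Vhost environment: no direct port binding; the NGINX proxy container
# routes requests to this environment based on the hostname.
services:
  web:
    environment:
      - VIRTUAL_HOST=example.lvh.me
```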
Quick subcommand description
- The subcommand init performs the following (in this order):
- Checks out the repositories (and sets "assume-unchanged" for tmp/* and config/*)
- Internally calls env config to configure the environment
- Sets the "CONTREXX_INSTALLED" constant to "true"
- Internally calls env up to start the docker containers
- Internally calls env update --db --drop-users to initialize the database
- Auto-configures caching for versions 5 and newer
- The subcommand config allows you to configure the environment automatically or manually, as well as to display the current configuration. The configuration is saved in config/settings.php, config/configuration.php and docker-compose.yml. _meta/docker-compose.tpl is used as a template for docker-compose.yml
- The subcommand up calls "docker-compose --project-name <name> up -d" and waits for the database server to be ready. <name> is set to your hostname without the dots, prefixed by "clxenv".
- The subcommand down calls "docker-compose --project-name <name> down". The "-v" flag is added to that call if "--purge" is specified.
- The subcommand restart internally calls "down" and "up"
- The subcommand update updates both repositories and reloads the database if necessary. You can use --db to only do a database reload
- The subcommand status shows the status of the docker containers associated with the current environment by calling "docker-compose --project-name <name> ps"
- The subcommand info internally calls "status" and "config --show"
- The subcommand shell opens an interactive bash shell to the web container
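The <name> derivation used by up (hostname without the dots, prefixed by "clxenv") can be sketched as follows (an illustration of the naming rule, not the actual implementation):

```shell
# Derive the docker-compose project name from a hostname:
# strip the dots and prefix the result with "clxenv".
project_name() {
  echo "clxenv$(echo "$1" | tr -d '.')"
}
```

For the hostname example.lvh.me this yields clxenvexamplelvhme.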
The following presets are set automatically:
|Contrexx/Cloudrexx version|PHP version|Database image|PHP image|Cache|Other|
|---|---|---|---|---|---|
|Cloudrexx 5 and newer|7.4|mariadb:10.5|cloudrexx/web:PHP<php-version>-with-mysql|All caches active| |
|Contrexx 3-4|5.6|mariadb:5|cloudrexx/web:PHP<php-version>-with-mysql|Inactive|Ports other than 80 are not supported|
- The port for phpMyAdmin cannot be changed. Therefore the initialization of a second (non-vhost-) environment (even if on another port) fails.
- On Windows you might encounter some permission problems. This is a known limitation of Docker on Windows.
See Enhancement ideas / Known problems for a complete list.
The envs command manages Docker containers which are not specific to one environment. Currently this manages an NGINX proxy with docker-gen, which allows virtual hosting without manual configuration. The proxy could be extended with automatically generated SSL certificates.
Short subcommand description
- up: Starts the proxy
- down: Stops the proxy
- restart: Internally calls "down" and "up"
- status: Calls "docker ps" for the proxy container
- info: Calls "docker ps" with a filter to show all containers which are part of an environment
- shell: Opens an interactive shell on the proxy container
- debug: Shows logs from the NGINX proxy (wrapper for "docker logs -f")
- list: Shows a list of all environments that are up
- find: Shows the working directory of an environment
Make one ENV accessible from another computer
As the *.lvh.me domains always point to localhost, another computer won't be able to access your ENVs using such a domain. In order to allow access from another computer, you need a domain (or IP address; for simplicity we simply say "domain" from now on, even if it might be an IP address) that points the other computer to yours. Possible values are your computer's hostname or IP address. Depending on your network there might be more possibilities.
Once you have such a domain, do the following in the main directory of the ENV you want to make accessible:
./cx env down
# edit docker-compose.yml in your favorite editor
# look for the environment variable "VIRTUAL_HOST" in the first service ("web")
# add a comma (",", without a space) and your domain after the current value of "VIRTUAL_HOST"
# save (and close) the file
./cx env up
And there you go, your environment is now available on both of these domains. You can add as many domains as you want for each ENV (just don't use the same for multiple ENVs).
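For illustration, after the edit the web service in docker-compose.yml might look like this (a sketch assuming the original hostname example.lvh.me and an added domain my-machine.local, both example values):

```yaml
services:
  web:
    environment:
      # original domain plus the added one, comma-separated without spaces
      - VIRTUAL_HOST=example.lvh.me,my-machine.local
```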
Note: you may need to disable Cloudrexx's domain enforcement in order to allow the other computer to navigate the site.
By default this shows the last fatal error of the Cloudrexx installation (using the debug log file /tmp/log/dbg.log).
- With --request only the last request's log (from /tmp/log/dbg.log) is shown (piped to less).
- With --follow the log stream (of /tmp/log/dbg.log) is followed continuously.
- With --web the log of the web server (Apache) is shown (continuously).
This command shows the help of all commands that are internally available. In addition it tries to call the help page via PHP and add the help from there to itself. This can easily be seen by running this command twice: once with the environment up and once with it down.
The Docker images used by cx are hosted on Docker Hub.
The following list shows the PHP extensions available in the Docker images provided by Cloudrexx. The images are based on the official PHP images (if not stated otherwise). PHP extensions are listed by
docker run --rm cloudrexx/web:PHP<php_version> php -r 'echo implode("\n", get_loaded_extensions()) . PHP_EOL;'
|Extension|PHP 5.3|PHP 5.6|PHP 7.0|PHP 7.1|PHP 7.2|PHP 7.3|PHP 7.4|PHP 8.0|
|---|---|---|---|---|---|---|---|---|
Use the following command to activate the PHP Xdebug extension in an environment. Please note that this is not a persistent change and will be dropped when you restart your environment.
./cx env exec --root "pecl install xdebug && docker-php-ext-enable xdebug && apachectl -k graceful"
If you need composer in your container you can install it with the following one-liner:
./cx env exec --root "curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer"
After that you can call composer like this:
./cx env exec "composer --version"
NodeJS / NPM
If you need NodeJS/NPM in your container you can install it with the following one-liner:
./cx env exec --root "curl -sL https://deb.nodesource.com/setup_14.x | bash - && apt-get install -y nodejs"
After that you can call npm like this:
./cx env exec "npm -v"
IMPORTANT: Due to security restrictions, strace can't be used by default in a Docker container. The following guide lets you disable certain security features of Docker to make strace work. You must never apply the following configuration in a production environment, as it makes the container vulnerable to attacks.
To allow debugging a process using strace the web container has to be configured as follows:
First, ensure the container is not running:
./cx env down
Then, add the following options to docker-compose.yml:
security_opt:
  - seccomp:unconfined
privileged: true
Example of modified docker-compose.yml file:
services:
  web:
    image: "cloudrexx/web:PHP5.6-with-mysql"
    hostname: "example.lvh.me"
    ports:
      - "80:80"
    volumes:
      - .:/var/www/html
    depends_on:
      - db
      - usercache
    networks:
      - front-end
      - back-end
    security_opt:
      - seccomp:unconfined
    privileged: true
Start up the environment:
./cx env up
Install strace in the web container:
./cx env exec --root "apt update && apt install -y strace"
Finally, strace is available for usage:
./cx env exec --root "strace -p <PID>"
HTTPS support is provided by the proxy container, but is not yet fully automated. In order to activate HTTPS support, do the following:
- Create a directory for the certificates (e.g. ~/ssl/).
- For each vhost you want to have HTTPS support for, add a private key and a public certificate file. Save them in your certificate directory as <hostname>.key and <hostname>.crt.
- For self-signed certificates that are accepted by the browser, see https://alexanderzeitler.com/articles/Fixing-Chrome-missing_subjectAltName-selfsigned-cert-openssl/
- Note that you'll have to import the generated certificate into your browser to have it accepted.
- Shut envs and all environments down (if running):
./cx envs down
- Execute the following command:
./cx envs up --certs-dir=<certificate_directory>
- Start one or more env:
./cx env up
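A self-signed certificate pair as mentioned above can be generated with openssl, for example (a sketch; example.lvh.me is a placeholder hostname, and the -addext option for subjectAltName requires OpenSSL 1.1.1 or newer):

```shell
# Create a self-signed certificate pair for one vhost, named as the
# proxy expects (<hostname>.key / <hostname>.crt).
HOST=example.lvh.me
CERT_DIR="${CERT_DIR:-$HOME/ssl}"
mkdir -p "$CERT_DIR"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=$HOST" \
  -addext "subjectAltName=DNS:$HOST" \
  -keyout "$CERT_DIR/$HOST.key" \
  -out "$CERT_DIR/$HOST.crt"
```

Remember that the browser will only accept the certificate once it is imported as described above.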
Connect with MySQL Workbench
In order to connect to the database instance directly (for example to use reverse engineering with MySQL Workbench) follow these steps:
- Shut your environment down
cx env down
- Edit the file docker-compose.yml in the root of your installation
- In the section of the "db" container add the following lines
ports:
  - 3306:3306
- Save the file and start the environment
- The database should now be reachable on localhost on the default port.
- Please note that only one env can be configured to do so at a time. Otherwise the database server will not start. You may choose different ports to expose different database servers at the same time.
- The file docker-compose.yml will get overwritten on cx env config. If you want to do this in a persistent way you may add it to _meta/docker-compose.tpl or fork the scripts repository.
Connect multiple ENV to each other
ENVs can talk to each other using their web container's full name. This allows direct communication from one instance to another. To get the full name of the web container, call the following in the root directory of your ENV (assuming you use ENVS):
./cx envs list --dir=$PWD --real
Then you can do the following in another ENV on the same machine:
./cx env shell
curl "http://<name_of_first_env>"
Enhancement ideas / Known problems
- ENHANCEMENT: Add support for
- ENHANCEMENT: Add autocompletion
- ENHANCEMENT: Add use-case for installing from package using the installer
- PROBLEM: Cache autoconfig support for pre 5 is missing
- PROBLEM: There's no way to access the MySQL server directly (which is a problem for the workflow when using MySQL workbench)
- PROBLEM: If the directory used as web-root is owned by root, permissions do not work properly. This is because the user IDs cannot be mapped correctly.
- PROBLEM: If the name of the directory where the environment is being initialized contains only numbers, the script doesn't work as intended, as it wrongly interprets the hostname as an IP address.
- PROBLEM: cx automatically reads all data from standard input (in case there is any). This breaks the ability to call cx from within a while-loop reading from standard input, as in while read input; do cx <command>; done
- Workaround: call cx with input redirection, i.e.: cx <command> < /dev/null
- Proposed solution: implement an argument (e.g. -n) that prevents cx from reading from standard input
- ./cx env init should force domain URL and protocol
- ./cx env down should be able to shut the environment down even if the configuration has changed since "up"
- ./cx env update --db should work for non-dockerized databases
- ./cx env up should show URLs
- ./cx env init: GIT checkout output is very long (branch list). This is very annoying on Windows, since output is very slow there
- ENHANCEMENT: Add parameters to ./cx env shell to start a shell of all containers
- ENHANCEMENT: Autoscaling cannot be configured
- ./cx env config should check if the configured images exist before writing
- ENHANCEMENT: Admin contact should be set
- ./cx env update should have an argument to specify another SQL dump
- PROBLEM: Make the port for phpMyAdmin configurable
- PROBLEM: Changing the hostname loses the db volume
- ./cx env update should update GIT first, then Docker, then reconfigure the environment (only if docker-compose.tpl has changed; ask the user if they want to, db will get dropped), then update the database
- ./cx env update assumes that the environment is up but does not check it
- ./cx env update does not update
- Automatically adopt/release existing environments on up/down
- Add subcommand "./cx envs locate <env>" to find the working directory of an environment (or integrate this in "./cx envs list")
- Allow filtering to get all environments that are pointing to the same working directory
- ENHANCEMENT: HTTPS support is missing
- ENHANCEMENT: If socket path is not standard linux config, proxy container will not work
- ENHANCEMENT: Add parameter to debug all of the environment's containers
- Add a shortcut for the bash shell of the Ubuntu wrapper container
- ./cx help fails if a command is specified (request is redirected to PHP)
- Commands executed in the wrapper may fail due to missing proxy host directory
- ENHANCEMENT: Add a default user to cloudrexx/ubuntu image (other than root)
- ENHANCEMENT: Add development environment scripts to cloudrexx/ubuntu image
- INFO: After "./cx env init", cx is deleted. The script automatically re-fetches it on the next run. If you choose a branch other than "master" (or a repository with a state different from the default one), you may get the wrong version of cx. Additionally, the command shows "error: unable to create file cx: No such file or directory" during execution.
- The reason for this is Windows' behavior with files that are executed. They seem to be locked somehow.
- PROBLEM: There might be a problem if the directory name is long
- PROBLEM: The script does not work, as macOS includes Bash version 3 (version 4 is required).
- PROBLEM: The local directory (/root/cx) displayed in the message "sudo cp /root/cx /usr/local/bin" is wrong (it's taken from the wrapper container instead of from the host).
All problems for versions 2, 3 and 4 will exist as well.
- PROBLEM: Untested
All problems for versions 3 and 4 will exist as well. No known problems specific to this version.
All problems for version 4 will exist as well. No known problems specific to this version.
- ./cx env config does not automatically configure the environment
- Port config is missing
- Offset path in .htaccess is wrong
- PROBLEM: Caching autoconfig is missing
- PROBLEM: .gitignore is not up to date in branch / .gitignore should be forced for all versions
- PROBLEM: Config might get lost
- PROBLEM: Port configuration might get overwritten.