- Kubernetes 1.11.1 is now the default version when installing Replicated with Kubernetes.
- The Kubernetes support bundle now includes the following additional resources: certificatesigningrequests, clusterrolebindings, clusterroles, controllerrevisions, cronjobs, mutatingwebhookconfigurations, poddisruptionbudgets, rolebindings, roles, validatingwebhookconfigurations, volumeattachments
- replicatedctl cluster node-join-script command has been added to retrieve a script to join a new node to the cluster.
- replicatedctl app status command has been added to retrieve detailed information on the application’s status.
- replicatedctl system status command has been added to retrieve the Replicated system status.
- The replicatedctl app status inspect command has been deprecated in favor of the replicatedctl app status command.
- Kubernetes clusters are now created with Rook 0.8.1 and Weave 2.4.0.
- The Kubernetes installer is now compatible with RHEL 7.5 and CentOS 7.5.
- The Kubernetes installer configures a 100GB Persistent Volume Claim for new airgap installs.
- The Kubernetes installer will prompt the user to disable firewalld when enabled.
- The Kubernetes installer will run kube-proxy in ipvs mode.
- The order in which the custom_metrics YAML property elements are listed is now maintained when rendered to the storage-aggregations.conf files in the replicated-statsd container.
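With this change, the rendered order matches the YAML order. A hedged sketch of such a block follows; the targets, retention strings, and factor values are purely illustrative, and the field names should be verified against the Replicated custom_metrics documentation:

```yaml
# Illustrative only: entries are rendered to storage-aggregations.conf
# in the order they appear here. Field names follow the Replicated
# custom_metrics schema as best understood; verify against the docs.
custom_metrics:
- target: stats.gauges.myapp.disk.*
  retention: "1m:10d,10m:120d"
  aggregation_method: average
  xfiles_factor: 0.3
- target: stats.counters.myapp.requests.*
  retention: "10s:1d,1m:30d"
  aggregation_method: sum
  xfiles_factor: 0
```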
- Airgapped installations will now respect license update policies set to “automatic”.
- Fixed an issue that could result in the V2 Support Bundle not honoring excluded environment variables when using the is_excluded_from_support parameter.
- Fixed an issue that could prevent the replicated-operator global service from updating on remote worker nodes when upgrading an airgapped installation of Replicated.
- Fixed an issue that could prevent Swarm and Kubernetes from pulling images from the local Replicated registry due to a missing certificate in
- Fixed an issue that allowed LDAP anonymous binds when logging into the Admin Console or when performing an Identity API login operation.
- Improved error messaging for failures in the replicatedctl script prior to running the specified sub-command.
- Fixed an issue that occasionally prevented the Replicated snapshot controller pod from starting on airgapped installations with Kubernetes.
- When the airgap flag is passed to the Replicated installation script but an online install is attempted, Replicated will now fetch the latest V2 Support Bundle specs rather than using the specs embedded in the license.
- Online native installations that passed the airgap flag to the Replicated install script will not use the local registry unless a remote operator is detected.
- Fixed a crash due to an invalid LDAP query used in the Advanced Search user or group option.
- Fixed possible snapshot DB corruption and a crash caused by this corruption.
- Fixed an issue where stopped containers would not be included in the V2 Support Bundle.
- Fixed a bug that caused the custom cert to not be installed in Docker configuration during the initial setup.
- Fixed a bug where custom certificates without IP SANs, uploaded during initial setup, prevented Docker from pulling images on additional nodes and therefore prevented the application from starting.
- A custom restore script can now be defined in the backup section of the Replicated YAML to customize the restore process.
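A hedged sketch of what such a backup section might look like; the key name for the custom restore script (restore_script below) is an assumption rather than a confirmed schema field, so check the Replicated YAML reference before use:

```yaml
# Hedged sketch of the backup section. The restore_script key name is
# an assumption, not a confirmed schema field.
backup:
  enabled: true
  script: |
    #!/bin/sh
    # runs at snapshot time
    cp -a /data/myapp /backup/
  restore_script: |
    #!/bin/sh
    # runs at restore time
    cp -a /backup/myapp /data/
```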
- The replicatedctl license-load CLI command is now supported in airgapped mode with the addition of the flag
- The Swarm easy install script will now accept the flag tls-cert-path= with an absolute path to the trusted CA certificates file on the host.
- A custom certificate uploaded by the end-user will now be used on all ports exposed by Replicated on the host.
- File permissions have been hardened for all Replicated persistent data.
- Improvements have been made to error messaging in the UI when audit log components fail to initialize.
- Primary-color buttons in the Replicated Admin Console have been darkened so they are not mistaken for disabled buttons.
- The --data flag of the replicatedctl app-config set CLI command will now accept values containing commas.
- Test procedures with the run_on_save property set to true will no longer be evaluated if the underlying config item's when condition evaluates to false.
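An illustrative config item showing the combination of when and run_on_save; the group, item, and option names here are hypothetical, though the test_proc fields follow the documented Replicated schema:

```yaml
# Illustrative config item: with this change, the test_proc below is no
# longer evaluated on save while the when condition is false.
# Item names and the hostname_enabled option are hypothetical.
config:
- name: networking
  items:
  - name: custom_hostname
    title: Custom Hostname
    type: text
    when: '{{repl ConfigOptionEquals "hostname_enabled" "1"}}'
    test_proc:
      display_name: Check hostname resolves
      command: resolve_host
      run_on_save: true
```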
- The replicatedctl support-bundle CLI command for Kubernetes and Swarm will now create a file on the host in the directory /var/replicated/support-bundles/, as specified in the command output.
- Support has been added for custom AWS endpoints in aws_auth test procedures. This makes it possible to provide test procedures that can validate against internal services such as Minio.
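A hedged sketch of an aws_auth test procedure pointed at an internal endpoint; the argument order and the way the endpoint URL is supplied are assumptions, so confirm against the test procedure documentation:

```yaml
# Hedged sketch: validating credentials against a Minio endpoint via
# aws_auth. The argument order and the endpoint argument are assumed,
# not confirmed schema; the config option names are hypothetical.
test_proc:
  display_name: Validate object store credentials
  command: aws_auth
  args:
  - '{{repl ConfigOption "s3_access_key"}}'
  - '{{repl ConfigOption "s3_secret_key"}}'
  - 'http://minio.internal:9000'   # custom endpoint instead of AWS
```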
- A support bundle task kubernetes.container-cp has been added that can copy files from containers within Kubernetes pods.
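A hypothetical V2 support bundle spec entry using the new task; every field name below is an assumption about the kubernetes.container-cp schema, shown only to convey the shape of such a task:

```yaml
# Hypothetical spec entry; all field names below are assumptions about
# the kubernetes.container-cp task's schema.
specs:
- kubernetes.container-cp:
    pod: myapp-0
    container: app
    src_path: /var/log/myapp/app.log
    output_dir: /kubernetes/container-cp
```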
- Replicated on Kubernetes now supports the ability to disable the deployment of the Contour ingress controller. When disabled, additional steps may be required to provide an ingress controller to your application.
- The Replicated installer can now abort the installation when Device Mapper in loop-lvm mode is configured as Docker's default storage driver.
- Replicated on Kubernetes now supports the ability to disable terminal clearing upon the completion of the installation.
- Hidden fields can now be exported as part of replicatedctl app-config export with the
- The Replicated installer will now assemble a default list of NO_PROXY hosts and add them to the environment of Docker, as well as the Replicated and Replicated Operator containers. Additional hosts can be added to the default NO_PROXY list with the additional-no-proxy flag for the Native, Swarm, and Kubernetes schedulers.
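The list assembly can be sketched in shell; the default entries and the additional hosts below are illustrative values, not the installer's actual defaults:

```shell
# Hedged sketch of how a NO_PROXY list might be assembled. The default
# entries and the additional hosts are illustrative, not the
# installer's actual values.
DEFAULT_NO_PROXY="localhost,127.0.0.1"
ADDITIONAL_NO_PROXY="registry.internal,10.96.0.0/12"   # from additional-no-proxy
NO_PROXY="${DEFAULT_NO_PROXY},${ADDITIONAL_NO_PROXY}"
export NO_PROXY
echo "$NO_PROXY"
```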
- Additional IP addresses have been added to Replicated's list of published IP addresses. Although these addresses are not yet active, any documentation that lists Replicated's IP addresses should be updated to include them.
- Replicated installer now returns an error and a 404 status code when an invalid route is requested from get.replicated.com. Previously the Replicated 1.2 installation script was rendered.
- Replicated Native and Swarm applications using the V2 support bundle will now collect logs and status information for stopped or crashed containers.
- The size and number of the Audit Log containers that ship alongside Replicated has been reduced.
- The clarity of process names used in the on-prem audit log has been improved. For example, the process that previously appeared in ps aux as processor now uses an entrypoint
- The log level of the replicated-auditlog-cron container has been changed from
- Fixed an issue where files added to a snapshot by a long running custom backup command could be truncated.
- Fixed an issue where snapshot restores from the CLI exited with a 404 not found error on the Swarm scheduler.
- All known CVEs with fixes have been patched in all images distributed by Replicated at the time of the release. For more information see this article.
- Replicated now supports Ubuntu Bionic LTS 18.04 on the Native and Swarm schedulers.
- Improvements have been made around surfacing license upload and sync error messages to the Browser UI and CLI.
- Application services with failing tasks will be force restarted when starting an app on the Swarm scheduler.
- Replicated will use the directory /var/lib/replicated/tmp, mounted from the host, when extracting airgapped bundles rather than /tmp inside the Replicated container.
- Replicated Cloud API will now return status code 404 when a license is not found rather than 500.
- Resolved a race condition that may cause the application state to get overwritten, resulting in Replicated starting and stopping the previous application version.
- Updated the container-selinux package (from mirror.centos.org) from 2.33 to 2.42.
- Suppressed a Docker warning when bypassing the on-prem registry indicating that the image cannot be accessed at the registry address.
- Multi-strategy snapshot support has been added to the Replicated Platform, allowing vendors to configure different snapshot strategies, each with its own custom commands, backup schedule, and remote backup destination.
- The S3 snapshot backend can now be configured to send the ‘aws:kms’ Server Side Encryption headers.
- File deduplication during snapshotting can now be disabled with the disable_deduplication flag.
- On-premise Docker registry data can be excluded from snapshots using the exclude_registry_data flag.
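The two new flags might sit alongside the existing backup options; their placement under the backup section is an assumption, as the release notes name only the flags themselves:

```yaml
# Hedged sketch; only disable_deduplication and exclude_registry_data
# are named in the release notes, and their placement under backup is
# an assumption.
backup:
  enabled: true
  disable_deduplication: true    # skip file deduplication during snapshotting
  exclude_registry_data: true    # omit on-prem registry data from snapshots
```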
- Container labels can now be specified on the Replicated Native scheduler.
- Container logs can be excluded from support bundles on the Native scheduler.
- Snapshots on the Swarm scheduler now use the overlay network of the replicated stack rather than the host network.
- Assignee has been added to the License API integration API GET /license/v1/license response.
- The option license.id has been added to the LicenseProperty template function.
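An illustrative use of the new option inside a container definition; the component, image, and environment variable names are hypothetical, while LicenseProperty itself is the template function named above:

```yaml
# Illustrative use of the new license.id option; component, image, and
# environment variable names are hypothetical.
components:
- name: app
  containers:
  - image_name: myapp
    env_vars:
    - name: LICENSE_ID
      static_val: '{{repl LicenseProperty "license.id"}}'
```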
- Additional license data has been added to the Support Bundle including assignee, channel name, channel id, multi-channel channels array, expiration time, expiration policy, and whether or not the license is expired.
- The release channel for the repository used to install Docker has been changed from “test” to “stable”.
- Resolved an issue that could cause the Replicated Console to report the wrong application sequence as current on application updates.
- Swarm tasks stuck in the shutdown state will no longer block application starting and stopping.
- Fixed an issue that caused Replicated upgrades of worker nodes to fail in Swarm airgapped environments.
- Resolved an issue where the License API will fail when numeric license field values are empty.
- Fixed an issue where journald logs were missing from the Support Bundle on some Linux distributions when the log data directory defaults to
- Resolved a race condition in the Support Bundle that could cause timeouts for commands that run and attach to container output.
- No longer report the false error open ***: is a directory in the Support Bundle error.json file.
- Resolved an issue where restoring a snapshot using the replicatedctl snapshot CLI would result in a 404 error.
- Fixed a bug that caused a crash when applying some PersistentVolumeClaims in the Kubernetes scheduler.
- Fixed a bug that blocked application state from updating while starting and stopping.
- Fixed a bug that caused application Persistent Volumes to sometimes be excluded from snapshots with the Kubernetes scheduler.
- Fixed a bug that prevented application Persistent Volumes from being restored with the Kubernetes scheduler.
- Fixed a bug that caused the audit log to become unresponsive after restores with the Swarm scheduler.
- Replicated will now install Docker 17.12.1-ce as its default version of Docker for both the Native and Swarm schedulers.
- The Replicated Native scheduler now supports Amazon Linux 2018.03.
- Completed Kubernetes Jobs no longer appear in the Pods table on the Cluster page of the Admin Console.
- Kubernetes DaemonSets and CronJobs are now removed when the application is stopped.
- StatefulSets are now scaled to 0 replicas when the application is stopped.
- Added YAML for all resources to the support bundle.
- Fixed an issue that prevented applications on Kubernetes from auto-starting after upgrade.
- Improved functionality of the start/stop button in the Admin Console for the Kubernetes scheduler.
- Fixed an issue that caused pods with multiple containers to report 0 ready in the pods table on the Cluster page of the Admin Console.
- The storage_class param that configures the StorageClass for Replicated PersistentVolumeClaims is now applied to all application PersistentVolumeClaims when using the Kubernetes scheduler.
- Fixed the logging level in the Retraced processor and cron images to prevent excessive logging.
- Fixed an issue that could sometimes cause the support bundle to be empty when run in a multi-node airgapped Kubernetes environment.
- Fixed an issue that prevented support bundle commands from using the Docker API when using the Kubernetes scheduler.
- Included Retraced logs in the Support Bundle when using the Kubernetes scheduler.
- A file handle leak that occurred when the application was restarted has been fixed.
- Replicated Docker images have been updated to address the following CVEs: CVE-2018-7490, CVE-2018-6797, CVE-2018-6798, CVE-2018-6913