
1.6.7-tectonic.1


Release tarball is available at https://releases.tectonic.com/tectonic-1.6.7-tectonic.1.tar.gz.
Release signature is available at https://releases.tectonic.com/tectonic-1.6.7-tectonic.1.tar.gz.asc.

Tectonic 1.6.7-tectonic.1 (2017-07-11)

  • Updates to Kubernetes v1.6.7
  • Update operators are available to all users to power automated operations
  • Reduced flapping of node NotReady status
    • Increased the controller manager health timeout to be greater than the TTL of the load balancer DNS entry (see the sketch after this list)
    • The Kubernetes default of 40s is below the minimum TTL of 60s on many platforms
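
The 40s default mentioned above most likely corresponds to the upstream kube-controller-manager --node-monitor-grace-period flag, which controls how long a node may go unreported before it is marked NotReady. A minimal sketch of raising it above a 60s DNS TTL in the controller manager's container arguments is shown below; the exact flag value Tectonic ships is not stated in these notes.

    # Illustrative kube-controller-manager container arguments only.
    # --node-monitor-grace-period is an upstream Kubernetes flag (default 40s);
    # the value used here is an example, not necessarily what Tectonic sets.
    command:
    - ./hyperkube
    - controller-manager
    # Raised above the 60s minimum load balancer DNS TTL so a slow DNS
    # refresh no longer flips nodes to NotReady.
    - --node-monitor-grace-period=2m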

Console 1.7.4

  • All tables have sortable columns
  • Removed broken Horizontal Pod Autoscalers UI
  • Adds autocomplete for RBAC binding form dropdowns
  • Adds ability to edit and duplicate RBAC bindings
  • Adds RBAC edit binding roles dropdown filtering by namespace
  • Improved support for valueless labels and annotations

Tectonic Installer

  • Installer will generate all TLS certificates for etcd
  • Terraform tfvars are now pretty-printed

Upgrade Notes - Changes to affinity

When upgrading to Tectonic-1.6.6, we make two additional changes to the kube-scheduler and kube-controller-manager manifests besides bumping their image versions:

  • Change the pod anti-affinity from preferredDuringSchedulingIgnoredDuringExecution to requiredDuringSchedulingIgnoredDuringExecution (see the sketch after this list).
  • Set the Deployment replica counts equal to the number of master nodes.
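
For reference, a required pod anti-affinity stanza in a Deployment's pod template looks roughly like the sketch below. The label selector is illustrative and may not match the labels Tectonic actually applies to kube-controller-manager.

    # Sketch of required pod anti-affinity; the k8s-app label is an assumed
    # example, not necessarily the label Tectonic uses.
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: kube-controller-manager
            # One replica per node: no two matching pods may share a hostname.
            topologyKey: kubernetes.io/hostname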

These changes mean that if any master node goes down and never comes back during the upgrade,
the upgrade won't complete because there aren't enough nodes left to schedule the pods.

For example, if there are 5 master nodes and the kube-controller-manager (KCM) Deployment has 2 replicas,
then during the upgrade the KCM will be scaled up to 5 replicas. Under normal conditions these replicas are spread across all master nodes, with exactly one running on each.

However, if a master node goes down for any reason (it will show up as NotReady in kubectl get nodes), one pod cannot be scheduled because of the required pod anti-affinity, so it will get stuck in the Pending state and prevent the upgrade from proceeding.
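
To confirm that this is what is blocking an upgrade, the standard kubectl checks are enough. The kube-system namespace and the placeholder pod name below assume the default self-hosted control plane layout.

    kubectl get nodes                                  # any master stuck in NotReady?
    kubectl -n kube-system get pods -o wide            # a controller manager or scheduler pod stuck Pending?
    kubectl -n kube-system describe pod <pending-pod>  # events show the anti-affinity scheduling failure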

Fortunately, this doesn't make upgrading to Tectonic-1.6.6 more fragile than before, because the DaemonSet rolling upgrade in previous versions faces the same issue when a node goes down. For more information and questions, contact your support team or the Tectonic Forum.